Gauss-Newton for multivariable nonlinear least squares regression

by Trent Guidry, 9 August 2009 12:38
The Gauss-Newton algorithm is a numerical method that can, in some cases, be used to solve nonlinear least squares regression problems.
 
Functions used in a nonlinear regression involve three types of variables: the dependent variables, the independent variables, and the regression parameters.
The independent variables are the inputs that are varied in order to measure their effect on other variables. For example, with an equation of state, if the pressure is measured as a function of temperature and volume, then the temperature and volume are independent variables. The independent variables are denoted as x in this post.
The dependent variables are the observed outcomes of varying the independent variables. For the equation of state example given above, the pressure would be a dependent variable. The dependent variables are denoted as y in this post.
Both the independent variables and the dependent variables are model independent. They both can be determined experimentally without using a model.
A model is one or more equations used to correlate the dependent variables with the independent variables. For the equation of state example given above, the model would be the equation of state used to correlate the dependent variable (pressure) with the independent variables (temperature and volume). For this case, example models include the ideal gas model, the Van der Waals model, the Peng-Robinson model, the Soave-Redlich-Kwong model, etc. The model functions are denoted as f in this post.
The regression parameters are parameters of the model used to correlate the dependent and independent variables and are model dependent. The regression parameters are the unknown variables to be solved for in a nonlinear regression. Regression parameters are denoted as p in this post.
In a model, the dependent variables are functions of the independent variables and the model parameters. The difference between the values of the dependent variables predicted by the model and the values of the dependent variables actually observed is the error of the model, which is denoted as e in this post. This is shown in the equation below.
y = f(x,p) + e
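As a concrete sketch of these three variable types, the snippet below encodes the ideal gas model P = nRT/V as a C# delegate, treating temperature and volume as independent variables, the number of moles as the regression parameter, and a measured pressure as the dependent variable. The numbers and names here are purely illustrative and are not part of the classes presented later in this post.

using System;

class ModelVariablesExample
{
    static void Main()
    {
        // Independent variables x: temperature T (K) and volume V (m^3).
        double T = 300.0;
        double V = 0.025;

        // Regression parameter p: the number of moles n, treated here as the
        // unknown to be fitted. R is the known gas constant.
        const double R = 8.314;
        double n = 1.0;

        // Model f(x, p): ideal gas pressure P = n R T / V.
        Func<double, double, double, double> f = (t, v, moles) => moles * R * t / v;

        // Dependent variable y: a measured pressure, and the model error e = y - f(x, p).
        double y = 101325.0;
        double e = y - f(T, V, n);
        Console.WriteLine($"predicted = {f(T, V, n):F0} Pa, error = {e:F0} Pa");
    }
}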
This can be written in matrix form as shown below.
Y = F + E
This can be rearranged to the equation shown below.
E = Y - F
Writing a Taylor series for E about a set of regression parameters P0, while holding the dependent and independent variables constant, and truncating it after the first-order term gives the following.
E(P1) = E(P0) + JE(P1 − P0)
where JE is the Jacobian of the error functions with respect to the parameters.
Using E = Y – F and noting that the Y column matrix is constant with respect to the parameters gives
JE = -J
where J is the Jacobian of the model functions with respect to the parameters. Substituting this in gives the following.
E(P1) = E(P0) − J(P1 − P0)
The square residual error Sr about P1 can be written as shown below.
Sr = E(P1)^T E(P1) = (E(P0) − J(P1 − P0))^T (E(P0) − J(P1 − P0)) = E(P0)^T E(P0) + (P1 − P0)^T (J^T J)(P1 − P0) − 2 E(P0)^T J(P1 − P0)
At the minimum square residual error, the derivative of Sr with respect to (P1 − P0) is equal to zero.
dSr/d(P1 − P0) = 2 J^T J(P1 − P0) − 2 J^T E(P0) = 0
This gives the Gauss-Newton equation shown below.
J^T J(P1 − P0) = J^T E(P0), or equivalently P1 = P0 + (J^T J)^(−1) J^T E(P0)
The Gauss-Newton algorithm works by starting with an initial guess of the parameters, which is used as P0. The residual error and the Jacobian are then computed, and the equation above is used to solve for P1, which becomes the new P0. The algorithm is repeated until it either converges or diverges.
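Before turning to the GaussNewton class itself, here is a self-contained sketch of this loop for a hypothetical exponential model y = p0 exp(p1 x), using plain arrays, analytic partial derivatives, and a small Gaussian elimination routine to solve the normal equations J^T J(P1 − P0) = J^T E(P0). The model, data, and iteration count are illustrative assumptions, not part of the library code below.

using System;

class GaussNewtonSketch
{
    static void Main()
    {
        // Hypothetical data generated from y = 2 * exp(0.5 * x).
        double[] x = { 0.0, 0.5, 1.0, 1.5, 2.0 };
        double[] y = { 2.00, 2.57, 3.30, 4.23, 5.44 };
        double[] p = { 1.0, 1.0 }; // initial guess P0 for (p0, p1)

        for (int iter = 0; iter < 10; iter++)
        {
            int n = x.Length, m = p.Length;
            var J = new double[n, m]; // Jacobian of the model w.r.t. the parameters
            var E = new double[n];    // residuals E(P0)
            for (int i = 0; i < n; i++)
            {
                double expTerm = Math.Exp(p[1] * x[i]);
                E[i] = y[i] - p[0] * expTerm;    // e_i = y_i - f(x_i, p)
                J[i, 0] = expTerm;               // df/dp0
                J[i, 1] = p[0] * x[i] * expTerm; // df/dp1
            }

            // Build the augmented normal equations [J^T J | J^T E].
            var A = new double[m, m + 1];
            for (int r = 0; r < m; r++)
            {
                for (int c = 0; c < m; c++)
                    for (int i = 0; i < n; i++)
                        A[r, c] += J[i, r] * J[i, c];
                for (int i = 0; i < n; i++)
                    A[r, m] += J[i, r] * E[i];
            }

            // Gaussian elimination with partial pivoting, then back substitution.
            for (int k = 0; k < m; k++)
            {
                int pivot = k;
                for (int r = k + 1; r < m; r++)
                    if (Math.Abs(A[r, k]) > Math.Abs(A[pivot, k])) pivot = r;
                for (int c = k; c <= m; c++)
                {
                    double tmp = A[k, c]; A[k, c] = A[pivot, c]; A[pivot, c] = tmp;
                }
                for (int r = k + 1; r < m; r++)
                {
                    double factor = A[r, k] / A[k, k];
                    for (int c = k; c <= m; c++) A[r, c] -= factor * A[k, c];
                }
            }
            var delta = new double[m];
            for (int k = m - 1; k >= 0; k--)
            {
                double s = A[k, m];
                for (int c = k + 1; c < m; c++) s -= A[k, c] * delta[c];
                delta[k] = s / A[k, k];
            }

            // P1 = P0 + delta becomes the next P0.
            for (int j = 0; j < m; j++) p[j] += delta[j];
        }
        Console.WriteLine($"p0 = {p[0]:F4}, p1 = {p[1]:F4}"); // roughly 2.0 and 0.5
    }
}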
The Gauss-Newton algorithm is implemented in the GaussNewton class, which is shown at the bottom of the post.
There are two constructors for the GaussNewton class. The first takes a delegate for the model function, an array of the parameters to be regressed, an array of the observed (independent variable) parameters, a double array of the data, and an integer that indicates the number of points to use when numerically computing the derivatives in the Jacobian. The second constructor is similar to the first, except that it defaults the number of points used to compute the derivatives to 3. The constructor allocates memory for the Jacobian, the residual errors, and the initial regression parameter values, which are stored in Matrix fields, and it creates the Derivatives class instance that is used to compute the Jacobian.
The Iterate function performs one iteration of the Gauss-Newton algorithm. It first loops through the data points, computing the residuals and filling in the Jacobian matrix using numerical partial derivatives. The Gauss-Newton equation is then used to solve for the next set of parameter values, and the model parameters are set to those values.
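The numerical derivatives are delegated to the Derivatives class, whose internals are not shown in this post. The sketch below illustrates the underlying idea with a simple central difference approximation; the function signature, step size choice, and array layout are assumptions, not the actual Derivatives implementation.

using System;

static class JacobianSketch
{
    // Central difference partial derivative of the model with respect to p[j]:
    // df/dp_j is approximately (f(p + h) - f(p - h)) / (2 h).
    static double Partial(Func<double, double[], double> f, double x, double[] p, int j)
    {
        double h = 1e-6 * Math.Max(1.0, Math.Abs(p[j]));
        double saved = p[j];
        p[j] = saved + h;
        double fPlus = f(x, p);
        p[j] = saved - h;
        double fMinus = f(x, p);
        p[j] = saved; // restore the parameter value
        return (fPlus - fMinus) / (2.0 * h);
    }

    // Fills J so that J[i, j] = df(x_i, p)/dp_j, one row per data point.
    static double[,] BuildJacobian(Func<double, double[], double> f, double[] x, double[] p)
    {
        var J = new double[x.Length, p.Length];
        for (int i = 0; i < x.Length; i++)
            for (int j = 0; j < p.Length; j++)
                J[i, j] = Partial(f, x[i], p, j);
        return J;
    }

    static void Main()
    {
        // Example: f(x, p) = p0 * exp(p1 * x) at p = (2, 0.5).
        Func<double, double[], double> f = (x, p) => p[0] * Math.Exp(p[1] * x);
        double[,] J = BuildJacobian(f, new[] { 0.0, 1.0, 2.0 }, new[] { 2.0, 0.5 });
        Console.WriteLine($"J[1,0] = {J[1, 0]:F4}, J[1,1] = {J[1, 1]:F4}"); // ~1.6487, ~3.2974
    }
}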
 
 
using System;
using System.Diagnostics;

namespace NumericalMethods
{
    public class GaussNewton
    {
        private Matrix _jacobian;
        private Matrix _residuals;
        private Matrix _regressionParameters0;
        private Derivatives _derivatives;
        private Parameter[] _regressionParameters;
        private Parameter[] _observedParameters;
        private Func<double> _regressionFunction;
        private double[,] _data;

        public GaussNewton(Func<double> regressionFunction, Parameter[] regressionParameters, Parameter[] observedParameters, double[,] data, int numberOfDerivativePoints)
        {
            // The data array must contain one row per observed (independent)
            // parameter plus a final row of measured dependent variable values.
            Debug.Assert(data.GetLength(0) == observedParameters.Length + 1);
            _data = data;
            _observedParameters = observedParameters;
            _regressionParameters = regressionParameters;
            _regressionFunction = regressionFunction;
            int numberOfParameters = _regressionParameters.Length;
            int numberOfPoints = data.GetLength(1);
            _derivatives = new Derivatives(numberOfDerivativePoints);
            _jacobian = new Matrix(numberOfPoints, numberOfParameters);
            _residuals = new Matrix(numberOfPoints, 1);
            _regressionParameters0 = new Matrix(numberOfParameters, 1);
        }

        public GaussNewton(Func<double> function, Parameter[] regressionParameters, Parameter[] observedParameters, double[,] data)
            : this(function, regressionParameters, observedParameters, data, 3)
        {
        }

        public void Iterate()
        {
            int numberOfPoints = _data.GetLength(1);
            int numberOfParameters = _regressionParameters.Length;
            double currentResidual = 0.0;
            for (int i = 0; i < numberOfPoints; i++)
            {
                // Load the independent variable values for this data point.
                for (int j = 0; j < _observedParameters.Length; j++)
                {
                    _observedParameters[j].Value = _data[j, i];
                }
                // Residual e = y - f(x, p), with y in the last row of the data array.
                double functionValue = _regressionFunction();
                double residual = _data[_observedParameters.Length, i] - functionValue;
                _residuals[i, 0] = residual;
                currentResidual += residual * residual;
                // Row i of the Jacobian: numerical partial derivatives of the
                // model with respect to each regression parameter.
                for (int j = 0; j < numberOfParameters; j++)
                {
                    _jacobian[i, j] = _derivatives.ComputePartialDerivative(_regressionFunction, _regressionParameters[j], 1, functionValue);
                }
            }
            for (int i = 0; i < numberOfParameters; i++)
            {
                _regressionParameters0[i, 0] = _regressionParameters[i];
            }
            // Gauss-Newton step: solve (J^T J) delta = J^T E, then P1 = P0 + delta.
            Matrix matJacobianTrans = _jacobian.Transpose();
            Matrix matJTranResid = matJacobianTrans * _residuals;
            Matrix matJTranJ = matJacobianTrans * _jacobian;
            Matrix matDelta = matJTranJ.SolveFor(matJTranResid);
            Matrix matNewRegPars = _regressionParameters0 + matDelta;
            for (int i = 0; i < numberOfParameters; i++)
            {
                _regressionParameters[i].Value = matNewRegPars[i, 0];
            }
        }
    }
}
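A hypothetical usage sketch is shown below. It assumes that the Parameter class from the accompanying library can be constructed from an initial value and exposes a Value property (as used in the Iterate method above); those assumptions, along with the model, data, and iteration count, are illustrative.

using System;
using NumericalMethods;

class GaussNewtonUsage
{
    static void Main()
    {
        // Regression parameters with initial guesses, and the observed
        // (independent) variable. The Parameter constructor is assumed here.
        var a = new Parameter(1.0);
        var b = new Parameter(1.0);
        var x = new Parameter(0.0);

        // Model f(x, p) = a * exp(b * x); it reads the current Parameter values,
        // which Iterate updates for each data point and each iteration.
        Func<double> model = () => a.Value * Math.Exp(b.Value * x.Value);

        // Row 0 holds the x values, row 1 the measured y values
        // (generated from y = 2 * exp(0.5 * x)).
        double[,] data =
        {
            { 0.00, 0.50, 1.00, 1.50, 2.00 },
            { 2.00, 2.57, 3.30, 4.23, 5.44 }
        };

        var solver = new GaussNewton(model, new[] { a, b }, new[] { x }, data);
        for (int i = 0; i < 10; i++)
        {
            solver.Iterate();
        }
        Console.WriteLine($"a = {a.Value:F4}, b = {b.Value:F4}"); // expect roughly a = 2, b = 0.5
    }
}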

Comments (2)

jan mojzis, Slovakia
4/24/2010 7:39:20 AM #

Hi, thanks for the algorithm. I was looking for it for quite a while. However, I would be very grateful if you could post/upload all of the necessary classes (e.g. MatrixNumeric, ...). Or is the algorithm pseudocode, and these classes don't exist?

Thanks,
Jan

Trent, United States
4/25/2010 2:02:34 PM #

I posted the source code for the numeric methods at www.trentfguidry.net/.../...gression-Download.aspx