The Levenberg-Marquardt algorithm is a method for solving nonlinear least squares problems. The algorithm was proposed by Levenberg in 1944 and by Marquardt in 1963. However, many implementations are either not robust or lack a solid theoretical justification. This paper presents a robust and efficient implementation of the algorithm with strong convergence properties. The implementation uses implicitly scaled variables and a parameter selection scheme based on Hebden's method. Numerical results are provided to illustrate the algorithm's behavior.
The algorithm is derived by linearizing F about the current point x and solving a constrained linear least squares problem: find the step p that minimizes the norm of the linearized model f + Jp, where f = F(x) and J is the Jacobian of F at x, subject to a constraint on the norm of Dp. The solution is given by p(λ) = - (J^T J + λD^T D)^{-1} J^T f, where λ ≥ 0 is zero when the unconstrained minimizer already satisfies the constraint and positive otherwise. If J is rank deficient and λ = 0, the solution is defined by a limiting process as λ → 0 from above.
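The step formula above can be sketched directly from the regularized normal equations. The function name below is illustrative, not the paper's actual routine, and a production implementation (such as the one the paper describes) would instead work with a QR factorization of the augmented matrix [J; sqrt(λ)D] to avoid forming J^T J explicitly:

```python
import numpy as np

def lm_step(J, f, D, lam):
    """Compute p(lambda) = -(J^T J + lambda D^T D)^{-1} J^T f.

    Sketch only: forms the normal equations, which squares the
    condition number of J; robust codes factor [J; sqrt(lam) D] instead.
    """
    A = J.T @ J + lam * (D.T @ D)
    return np.linalg.solve(A, -J.T @ f)
```

For λ = 0 this reduces to the Gauss-Newton step; as λ grows, p(λ) shrinks in norm and rotates toward the scaled steepest-descent direction.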
The algorithm is implemented in an iterative manner. At each iteration, a parameter λ is chosen such that the step p satisfies the constraint on the norm of Dp. If the trial point x + p sufficiently reduces the norm of F, it is accepted; otherwise the previous point is retained. The bound Δ is then enlarged or reduced according to how well the linear model predicted the actual reduction, and the scaling matrix D is updated at each iteration.
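The iteration just described can be sketched as follows. This is a simplified illustration with D fixed to the identity, a crude doubling search for λ in place of Hebden's scheme, and hypothetical helper names (`residual`, `jac`); the paper's implementation differs in many details:

```python
import numpy as np

def lm_iterate(residual, jac, x, delta=1.0, max_iter=50, tol=1e-8):
    """Simplified Levenberg-Marquardt trust-region loop (sketch).

    residual(x) -> f (vector), jac(x) -> J (matrix). Uses D = I and a
    naive lambda search; real codes use a safeguarded root-finder.
    """
    for _ in range(max_iter):
        f, J = residual(x), jac(x)
        # Increase lambda until the step satisfies ||p|| <= delta.
        lam = 0.0
        while True:
            A = J.T @ J + lam * np.eye(len(x))
            p = np.linalg.solve(A, -J.T @ f)
            if np.linalg.norm(p) <= delta or lam > 1e12:
                break
            lam = max(2.0 * lam, 1e-4)
        f_new = residual(x + p)
        # Ratio of actual to predicted reduction decides acceptance.
        pred = f @ f - (f + J @ p) @ (f + J @ p)
        actual = f @ f - f_new @ f_new
        rho = actual / pred if pred > 0 else 0.0
        if rho > 1e-4:
            x = x + p          # accept the trial point
        if rho > 0.75:
            delta *= 2.0       # model fits well: enlarge the bound
        elif rho < 0.25:
            delta *= 0.5       # poor fit: reduce the bound
        if np.linalg.norm(jac(x).T @ residual(x)) < tol:
            break
    return x
```

On a linear residual the loop reduces to a damped Gauss-Newton method and converges in a handful of iterations; the trust-region logic only becomes essential on nonlinear, ill-conditioned problems.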
The algorithm is robust and efficient, as it handles both well-conditioned and ill-conditioned problems. The use of implicitly scaled variables and the parameter selection scheme ensures that the algorithm converges to a local minimum. The algorithm is suitable for a wide range of applications, including curve fitting and parameter estimation.