A FAST ALGORITHM FOR NONLINEARLY CONSTRAINED OPTIMIZATION CALCULATIONS


M.J.D. Powell
The paper presents an algorithm for solving general constrained optimization problems, combining the advantages of variable metric methods for unconstrained optimization with the fast convergence of Newton's method for nonlinear equations. The algorithm builds on the work of Biggs and Han and is similar to one suggested in Powell's earlier paper. The goal is to find the least value of a real function \( F(x) \) subject to equality constraints \( c_i(x) = 0 \) and inequality constraints \( c_i(x) \geq 0 \). The method requires a positive definite matrix that is revised during the calculation, and step-length controls ensure convergence from poor starting approximations. The algorithm is applied to several examples; the numerical results are excellent and require less computational effort than other published algorithms. A theoretical analysis of the convergence properties of the method is reported elsewhere. The paper also explains variable metric methods for constrained optimization in detail, including the minimization of a quadratic function at each iteration and the revision of the metric matrix using gradient information.
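
As a rough illustration of the ideas summarized above (not the paper's own implementation), the sketch below applies one common realization of this scheme to an equality-constrained problem: each iteration solves a quadratic subproblem through its KKT system, controls the step length with a simple L1 merit function, and revises the positive definite matrix \( B \) with a damped BFGS-style update that keeps it positive definite. The function names, the fixed penalty weight `mu`, and the test problem are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a quasi-Newton SQP iteration (equality constraints only).
import numpy as np

def sqp_equality(F, gradF, c, jacC, x0, mu=10.0, iters=30, tol=1e-10):
    """Minimize F(x) subject to c(x) = 0 (equalities only, for brevity)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n)                      # positive definite metric matrix
    for _ in range(iters):
        g, cval, A = gradF(x), c(x), jacC(x)
        m = cval.size
        # Quadratic subproblem: min g^T d + 0.5 d^T B d  s.t.  A d + c = 0,
        # solved directly through its KKT linear system.
        K = np.block([[B, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, -cval])
        sol = np.linalg.solve(K, rhs)
        d, lam = sol[:n], sol[n:]
        if np.linalg.norm(d) < tol and np.linalg.norm(cval) < tol:
            break
        # Step-length control: backtrack on an L1 merit function
        # phi(x) = F(x) + mu * sum |c_i(x)|, with a fixed penalty weight mu.
        phi = lambda z: F(z) + mu * np.abs(c(z)).sum()
        alpha, phi0 = 1.0, phi(x)
        while phi(x + alpha * d) > phi0 and alpha > 1e-8:
            alpha *= 0.5
        x_new = x + alpha * d
        # Damped BFGS-style update of B using gradients of the Lagrangian,
        # in the spirit of the paper's modified quasi-Newton formula.
        s = x_new - x
        y = (gradF(x_new) + jacC(x_new).T @ lam) - (g + A.T @ lam)
        sBs = s @ B @ s
        theta = 1.0 if s @ y >= 0.2 * sBs else 0.8 * sBs / (sBs - s @ y)
        r = theta * y + (1.0 - theta) * (B @ s)
        B = B - np.outer(B @ s, B @ s) / sBs + np.outer(r, r) / (s @ r)
        x = x_new
    return x

# Usage example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0 (solution (0.5, 0.5)).
x_star = sqp_equality(
    F=lambda x: x[0]**2 + x[1]**2,
    gradF=lambda x: np.array([2 * x[0], 2 * x[1]]),
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    jacC=lambda x: np.array([[1.0, 1.0]]),
    x0=[3.0, -1.0],
)
print(x_star)
```

Inequality constraints and adaptive penalty weights, both handled in the paper, are omitted here to keep the sketch short.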