A FAST ALGORITHM FOR NONLINEARLY CONSTRAINED OPTIMIZATION CALCULATIONS

M.J.D. Powell
This paper presents an algorithm for solving general constrained optimization problems that combines the advantages of variable metric methods for unconstrained optimization with the fast convergence of Newton's method for solving nonlinear equations. The algorithm is based on the work of Biggs (1975) and Han (1975, 1976), and is very similar to the one suggested in Section 5 of Powell (1976). The main progress since that paper has been an increased understanding of the method through calculation and analysis. The algorithm is designed to find the least value of a real function F(x), where x is a vector of n real variables, subject to constraints. The objective and constraint functions are assumed to be differentiable, and their first derivatives can be calculated. The gradient vector g(x) is the gradient of F(x), and N is the matrix of normals of the active constraints. The algorithm is a "variable metric method for constrained optimization": it requires a positive definite matrix of dimension n that is revised as the calculation progresses, together with step-length controls that ensure convergence from poor starting approximations. Suitable techniques are described in Sections 3 and 4, and the algorithm is applied to some well-known examples in Section 5, where it gives excellent numerical results; it usually requires less than half the work of other published algorithms for constrained optimization. A theoretical analysis of the convergence properties of the method is reported elsewhere (Powell, 1977).

Section 2 explains variable metric methods for constrained optimization. Such methods have been used successfully for unconstrained optimization. Each iteration begins at a point x, where the gradient g is calculated, and a positive definite matrix B defines the current metric. The vector d that minimizes the quadratic function Q(d), built from g and B, is calculated and used as a search direction. The matrix B is then revised using the gradients g and g* (the gradients at the start and end of the iteration), and another iteration is begun. The method is extended to account for constraints on the variables.
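The sketch below is a minimal illustration, in Python with NumPy, of the kind of iteration described above; it is not Powell's published implementation. It assumes equality constraints c(x) = 0 only, solves the quadratic subproblem (minimize Q(d) = gᵀd + ½dᵀBd subject to c(x) + Ad = 0) through its KKT linear system, and revises B with a standard BFGS formula using the gradient change y = g* − g. The names sqp_step, bfgs_update, and the example functions are illustrative, and the paper's safeguard that keeps B positive definite when sᵀy is small, as well as its step-length control via a merit function, are omitted.

import numpy as np


def sqp_step(x, B, grad_f, c, jac_c):
    """Search direction d minimizing Q(d) = g^T d + 0.5 d^T B d
    subject to the linearized constraints c(x) + A d = 0,
    obtained by solving the KKT system of the quadratic subproblem."""
    g = grad_f(x)
    A = np.atleast_2d(jac_c(x))
    cx = np.atleast_1d(c(x))
    n, m = x.size, cx.size
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -cx])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # direction d and Lagrange multipliers


def bfgs_update(B, s, y):
    """Standard BFGS revision of B from the step s and gradient change
    y = g* - g.  (The paper instead damps y when s^T y is too small so
    that B stays positive definite; that refinement is omitted here.)"""
    Bs, sy = B @ s, s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B                     # crude guard only
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy


# Example: minimize F(x) = x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
F = lambda x: x @ x
grad_F = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])

x, B = np.array([2.0, 0.0]), np.eye(2)
for _ in range(10):
    d, lam = sqp_step(x, B, grad_F, c, jac_c)
    x_new = x + d                    # the paper chooses a step length here
    B = bfgs_update(B, d, grad_F(x_new) - grad_F(x))
    x = x_new

Because the example has a quadratic objective and a linear constraint, the first quadratic subproblem already reproduces the solution x = (0.5, 0.5); the loop is kept only to show how B would be carried from one iteration to the next.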