June 30, 1969, revised August 4, 1969 | Donald Goldfarb
A new rank-two variable-metric method is derived using Greenstadt's variational approach. The method preserves the positive-definiteness of the approximating matrix and, together with the DFP and rank-one methods, which it includes as special cases, forms a one-parameter family of variable-metric methods equivalent to Broyden's one-parameter family. The derivation solves a constrained minimization problem: the correction term is chosen to minimize a weighted Euclidean norm of the change to the approximating matrix, and the resulting correction terms are similar in form to those of the DFP and rank-one methods. The algorithms they define find the exact minimum of a strictly convex quadratic function in N steps, at which point the matrix H equals the true inverse Hessian. Because the corrections preserve positive-definiteness, the algorithms are stable. The DFP, rank-one, and new correction terms can be expressed as linear combinations of one another, forming a one-parameter family of correction terms that includes every symmetric variable-metric correction term published to date; different members of the family arise from different choices of the weighting matrix, and the family is equivalent to Broyden's algorithm. Implications of these results for the convergence and stability of variable-metric algorithms are also discussed.
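To make the abstract concrete, the following is a minimal NumPy sketch (not taken from the paper) of a rank-two quasi-Newton update of the kind described: the DFP correction plus an optional rank-one term whose coefficient `phi` sweeps out the one-parameter (Broyden) family. The function name, the test quadratic, and the exact-line-search setup are illustrative assumptions; on a strictly convex quadratic with exact line searches, the iteration terminates in N steps with H equal to the true inverse Hessian, as the abstract states.

```python
import numpy as np

def variable_metric_minimize(A, b, x0, phi=0.0):
    """Minimize the quadratic f(x) = 0.5 x^T A x - b^T x (A symmetric
    positive definite) with a Broyden-family rank-two update of the
    inverse-Hessian approximation H. phi = 0 gives the DFP correction.
    Illustrative sketch only; uses the closed-form exact line search
    that a quadratic objective permits."""
    n = len(b)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                       # initial inverse-Hessian approximation
    g = A @ x - b                       # gradient of the quadratic
    for _ in range(n):
        d = -H @ g                      # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)  # exact line search for a quadratic
        s = alpha * d                   # step s = x_{k+1} - x_k
        x = x + s
        g_new = A @ x - b
        y = g_new - g                   # gradient difference
        Hy = H @ y
        # DFP rank-two correction to H
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        # optional rank-one term sweeping out the one-parameter family
        if phi != 0.0:
            v = np.sqrt(y @ Hy) * (s / (s @ y) - Hy / (y @ Hy))
            H = H + phi * np.outer(v, v)
        g = g_new
    return x, H
```

Both the positive-definiteness of H along the iteration and the N-step termination can be checked numerically: after `n` steps on a positive-definite quadratic, `x` matches `np.linalg.solve(A, b)` and `H` matches `np.linalg.inv(A)` to rounding error, for any fixed `phi` in the family's stable range.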