LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
March 1982 | Christopher C. Paige and Michael A. Saunders
LSQR is an iterative method for solving large, sparse systems of linear equations and sparse least-squares problems. It is based on the bidiagonalization procedure of Golub and Kahan and is analytically equivalent to the standard method of conjugate gradients applied to the normal equations, but it has better numerical properties. The algorithm includes reliable stopping criteria together with estimates of standard errors for the solution and of the condition number of the matrix A. A FORTRAN implementation is provided, and numerical tests against other conjugate-gradient algorithms show superior reliability on ill-conditioned systems.
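As a modern illustration (not part of the original paper), SciPy's scipy.sparse.linalg.lsqr implements this algorithm; a minimal sketch of solving a sparse least-squares problem with it:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sparse_random(100, 40, density=0.05, random_state=rng)  # sparse 100x40 test matrix
b = rng.standard_normal(100)

# atol and btol correspond to the ATOL and BTOL tolerances discussed below
x, istop, itn = lsqr(A, b, atol=1e-8, btol=1e-8)[:3]
print(f"stop flag {istop} after {itn} iterations, ||x|| = {np.linalg.norm(x):.3g}")
```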
The algorithm is derived by applying the Lanczos process to a related symmetric system, which leads to the Golub-Kahan bidiagonalization of A. The bidiagonalization generates sequences of orthonormal vectors and scalar coefficients from which successive approximate solutions are built. LSQR uses these quantities to solve the least-squares problem iteratively, minimizing the residual norm ||b - Ax|| over an expanding Krylov subspace.
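The Golub-Kahan recurrence is compact enough to sketch directly. A minimal NumPy version, assuming no breakdown occurs (no normalization constant becomes zero); the function name golub_kahan is illustrative, not from the paper:

```python
import numpy as np

def golub_kahan(A, b, steps):
    """Lower bidiagonalization sketch:
      beta[0]*u[0]   = b,            alpha[0]*v[0]   = A.T @ u[0]
      beta[k+1]*u[k+1] = A @ v[k]     - alpha[k]*u[k]
      alpha[k+1]*v[k+1] = A.T @ u[k+1] - beta[k+1]*v[k]
    with alpha, beta >= 0 chosen so that u, v have unit norm."""
    beta = [np.linalg.norm(b)]
    u = [b / beta[0]]
    w = A.T @ u[0]
    alpha = [np.linalg.norm(w)]
    v = [w / alpha[0]]
    for k in range(steps - 1):
        w = A @ v[k] - alpha[k] * u[k]
        beta.append(np.linalg.norm(w))
        u.append(w / beta[-1])
        w = A.T @ u[-1] - beta[-1] * v[k]
        alpha.append(np.linalg.norm(w))
        v.append(w / alpha[-1])
    return alpha, beta, u, v
```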
At each iteration the algorithm produces, at negligible extra cost, estimates of quantities such as the residual norm, the norm of A, and the condition number of A. These estimates drive the stopping criteria and help assess the reliability of the computed solution. LSQR can also estimate standard errors for the components of the solution, which is useful in regression problems.
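For intuition (again using SciPy's implementation as an illustration), these estimates are returned directly by lsqr, so on a small dense problem they can be compared with the exact quantities; being cheap by-products of the recurrences, they are approximations rather than exact values:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, _ = lsqr(A, b)
r = b - A @ x
print(r1norm, np.linalg.norm(r))        # residual-norm estimate vs. exact
print(anorm, np.linalg.norm(A, 'fro'))  # norm-of-A estimate vs. Frobenius norm
print(arnorm, np.linalg.norm(A.T @ r))  # ||A^T r|| estimate vs. exact
```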
Stopping criteria for LSQR are expressed in terms of three user-supplied dimensionless tolerances: ATOL, BTOL, and CONLIM. The first two specify allowable relative perturbations in A and b, so an iterate is accepted as soon as it exactly solves some problem within those perturbation bounds; CONLIM caps the estimated condition number of A, terminating the iteration when the problem appears too ill-conditioned to pursue further.
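A sketch of the three tests, computing the norms directly for clarity rather than from the recurrence quantities LSQR actually uses (the function name and interface are hypothetical):

```python
import numpy as np

def stopping_tests(A, b, x, atol, btol, conlim, cond_estimate):
    """S1: compatible system  -- ||r|| <= btol*||b|| + atol*||A||*||x||
       S2: least squares      -- ||A.T r|| <= atol*||A||*||r||
       S3: ill-conditioning   -- condition estimate >= conlim"""
    r = b - A @ x
    norm_A = np.linalg.norm(A, 'fro')
    s1 = np.linalg.norm(r) <= btol * np.linalg.norm(b) + atol * norm_A * np.linalg.norm(x)
    s2 = np.linalg.norm(A.T @ r) <= atol * norm_A * np.linalg.norm(r)
    s3 = cond_estimate >= conlim
    return s1, s2, s3
```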
LSQR is particularly effective on ill-conditioned systems, where other methods may fail to converge. It can also be applied to singular systems: the condition estimate often reveals rank deficiency, and the iterates can still converge to an acceptable solution. Each iteration requires only one product with A, one product with its transpose, and a few vector operations, so the cost per iteration is low and the method is well suited to large, sparse matrices.
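An illustration of the singular case, assuming SciPy's lsqr and its damp argument, which solves the regularized problem min ||Ax - b||^2 + damp^2 ||x||^2 and keeps the iterates well behaved when A is rank-deficient:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
B = rng.standard_normal((60, 5))
A = B @ rng.standard_normal((5, 20))  # 60x20 matrix of rank 5
b = rng.standard_normal(60)

x_plain = lsqr(A, b)[0]
x_damped = lsqr(A, b, damp=1e-4)[0]   # small damping bounds ||x||
print(np.linalg.norm(x_plain), np.linalg.norm(x_damped))
```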
Other conjugate-gradient methods, among them CGLS, LSCG, and LSLQ, are also discussed. These methods arise from different bidiagonalization or normal-equations formulations and are compared with LSQR. Although they are analytically similar, LSQR proves more reliable in practice, especially on ill-conditioned systems. The algorithm has been tested on a variety of problems, including ones from analysis of variance and gravity-meter data, demonstrating its effectiveness and robustness.
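For comparison, a textbook sketch of CGLS: conjugate gradients applied to the normal equations A.T A x = A.T b without ever forming A.T A. Analytically it generates the same iterates as LSQR, which is why numerical behavior is what separates the two methods:

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-10):
    """Textbook CGLS sketch; stops when ||A.T r|| falls below tol."""
    x = np.zeros(A.shape[1])
    r = b.copy()          # residual b - A x, with x = 0 initially
    s = A.T @ r
    p = s.copy()
    gamma = s @ s         # ||A.T r||^2
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```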