The conjugate gradient method is widely used for solving large-scale numerical problems, including linear and nonlinear equations, eigenvalue problems, and minimization problems. Traditional methods that require storing the full coefficient matrix are impractical for large systems. For linear equations, the standard method requires the coefficient matrix to be symmetric and positive definite, but many problems give rise to indefinite matrices. This paper reviews modifications of the conjugate gradient method for symmetric indefinite problems.

One approach is the method of hyperbolic pairs, but it is not fully satisfactory. Another method minimizes the sum of squares of the residuals; one variant of it is preferred for indefinite systems, and it is related to the biconjugate gradient method. A further algorithm derived from the biconjugate gradient method is also suitable for indefinite systems. The paper also discusses the method of Paige and Saunders, which uses orthogonal reduction of tridiagonal matrices but involves comparatively complicated formulas; it is shown to be equivalent to the biconjugate gradient method. The paper concludes that methods derived from the biconjugate gradient method are currently the most promising for solving indefinite systems.

The conjugate gradient method is applied to solving Ax = b, where A is symmetric. Apart from the effects of round-off error, the method is well defined only when A is positive definite. The method is based on recurrence relations, with the initial iterate x_1 and residual r_1 given at the start. The algorithm computes x_{k+1} = x_k + α_k p_k and r_{k+1} = r_k - α_k A p_k, where the step length is α_k = r_k^T r_k / p_k^T A p_k. The method is due to Hestenes and Stiefel.
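The recurrences above can be sketched in a few lines of Python with NumPy. This is a minimal illustration for a symmetric positive definite A, using the standard Hestenes–Stiefel direction update p_{k+1} = r_{k+1} + β_k p_k (with β_k = r_{k+1}^T r_{k+1} / r_k^T r_k), which the summary above does not spell out; the function name and stopping tolerance are illustrative choices, not part of the paper.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Hestenes-Stiefel conjugate gradient for symmetric positive definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x          # initial residual r_1
    p = r.copy()           # initial search direction p_1 = r_1
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # alpha_k = r_k^T r_k / p_k^T A p_k
        x = x + alpha * p              # x_{k+1} = x_k + alpha_k p_k
        r = r - alpha * Ap             # r_{k+1} = r_k - alpha_k A p_k
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # p_{k+1} = r_{k+1} + beta_k p_k
        rs_old = rs_new
    return x

# Small symmetric positive definite test system
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For an indefinite symmetric A, p^T A p can vanish or change sign, so α_k may be undefined or the iteration may break down, which is precisely the difficulty the modifications reviewed in the paper address.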