30 Sep 2009 | Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd
The paper presents a quantum algorithm for solving linear systems of equations $A\vec{x}=\vec{b}$; rather than outputting the solution vector itself, it approximates the expectation value of an operator associated with the solution. The algorithm runs in $\mathrm{poly}(\log N, \kappa)$ time, where $N$ is the dimension of the matrix and $\kappa$ is its condition number. This is an exponential speedup in $N$ over classical algorithms, which require time scaling at least linearly in $N$ (roughly $O(N\sqrt{\kappa})$ for sparse, positive-definite systems via the conjugate gradient method). The algorithm combines Hamiltonian simulation with phase estimation to decompose the input state into the eigenbasis of the matrix and rescale each eigencomponent by the inverse of its eigenvalue.
The runtime is dominated by the phase estimation step, which introduces an error scaling as $1/t_0$, where the evolution time $t_0$ is chosen to balance this error against the success probability. Ill-conditioned matrices are handled by flagging the ill-conditioned part of the input and inverting only the well-conditioned part. The authors also argue that the algorithm is nearly optimal: matrix inversion in this setting is BQP-complete, so a classical algorithm with comparable runtime would allow classical computers to efficiently simulate arbitrary polynomial-time quantum computations. Finally, they discuss extensions and applications, including handling non-sparse matrices, measuring other features of the solution vector, and computing functions of the solution.
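The eigenbasis step can be illustrated classically. The sketch below (a toy $2\times 2$ example of my own, not from the paper) mirrors the transformation the quantum algorithm performs in superposition: expand $\vec{b}$ in the eigenbasis of $A$, divide each coefficient by its eigenvalue, and reassemble the solution; the ratio $\lambda_{\max}/\lambda_{\min}$ is the condition number $\kappa$ that governs the cost.

```python
import math

# Toy symmetric system A x = b (illustrative values, not from the paper).
A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 1.0]

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lams = [(tr - disc) / 2.0, (tr + disc) / 2.0]

def eigvec(lam):
    # (A - lam*I) u = 0 has the direction (A[0][1], lam - A[0][0]);
    # normalize it to a unit eigenvector (assumes A[0][1] != 0).
    v = (A[0][1], lam - A[0][0])
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

us = [eigvec(lam) for lam in lams]

# beta_j = <u_j | b>: the expansion of b in the eigenbasis of A.
betas = [u[0] * b[0] + u[1] * b[1] for u in us]

# The core step: x = sum_j (beta_j / lambda_j) u_j.
x = [sum(bt / lam * u[i] for bt, lam, u in zip(betas, lams, us))
     for i in range(2)]

# Condition number kappa = lambda_max / lambda_min.
kappa = max(lams) / min(lams)

# Sanity check: A x should reproduce b.
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
```

The quantum algorithm performs the same rescaling coherently on all eigencomponents at once, which is where the $\mathrm{poly}(\log N)$ dependence on the dimension comes from; the $1/\lambda_j$ factors are applied via a controlled rotation after phase estimation reveals each eigenvalue.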