October 1994 | Jeffrey A. Fessler, Member, IEEE, and Alfred O. Hero, Member, IEEE
The paper introduces the Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm, an extension of the classical EM method. Rather than updating all parameters simultaneously, SAGE updates them sequentially, alternating among several small hidden-data spaces chosen by the algorithm designer. This approach addresses two main drawbacks of classical EM: slow convergence, and maximization steps that become difficult when smoothness penalties couple the parameters. The authors prove that the sequence of estimates generated by SAGE monotonically increases the penalized-likelihood objective, derive asymptotic convergence rates, and give sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, the SAGE algorithms easily accommodate smoothness penalties and converge faster than the corresponding classical EM algorithms. The paper also includes detailed comparisons of the SAGE and classical EM algorithms, along with theoretical analyses of convergence and convergence rates.
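To make the update structure concrete, here is a minimal Python sketch of a SAGE-style iteration. It is an illustrative skeleton, not the authors' implementation: the names `index_sets`, `e_step`, and `m_step` are hypothetical placeholders, and the actual E- and M-steps depend on the chosen hidden-data spaces for each parameter block.

```python
import numpy as np

def sage(theta0, index_sets, e_step, m_step, n_iters=50):
    """Skeleton of a SAGE-style iteration (illustrative, not the paper's code).

    theta0      : initial parameter vector (np.ndarray)
    index_sets  : list of index arrays S_1, ..., S_K partitioning the parameters
    e_step      : callable (S, theta) -> surrogate; builds the conditional
                  expectation for the hidden-data space associated with block S
    m_step      : callable (S, surrogate, theta) -> new values for theta[S],
                  maximizing the surrogate over theta[S] with the rest fixed
    """
    theta = theta0.copy()
    for _ in range(n_iters):
        for S in index_sets:                 # cycle through the small hidden-data spaces
            surrogate = e_step(S, theta)     # E-step for the S-th hidden-data space
            theta[S] = m_step(S, surrogate, theta)  # M-step over block S only
    return theta
```

In this sketch, one pass through `index_sets` constitutes one SAGE iteration; classical EM corresponds to the special case of a single index set covering the full parameter vector with one large complete-data space. Updating small blocks against less informative hidden-data spaces is what yields the faster convergence and the decoupled, simpler maximization steps described above.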