Space-Alternating Generalized Expectation-Maximization Algorithm


October 1994 | Jeffrey A. Fessler, Member, IEEE, and Alfred O. Hero, Member, IEEE
The Space-Alternating Generalized Expectation-Maximization (SAGE) algorithm is a method for maximizing penalized-likelihood functions in statistical estimation problems. Unlike the classical Expectation-Maximization (EM) algorithm, which updates all parameters simultaneously and can be slow to converge, SAGE updates the parameters sequentially by alternating between several small hidden-data spaces. Sequential updates decouple the parameters, which makes the method particularly useful for large parameter spaces and lets smoothness penalties be accommodated more effectively, since the penalized maximization step no longer couples all parameters at once.

The SAGE algorithm is proven to monotonically increase the penalized-likelihood objective, and this monotonicity guarantees its convergence behavior. Its asymptotic convergence rate depends on the Fisher information of the chosen hidden-data spaces: less informative hidden-data spaces yield faster convergence.
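The Fisher-information claim can be made concrete with the standard local analysis of EM-type fixed-point iterations. The notation below is ours and the statement is the classical heuristic, not the paper's precise per-subset result.

```latex
% Near a fixed point \hat{\theta}, an EM-type iteration contracts as
\theta^{(k+1)} - \hat{\theta}
  \approx \left( I - F_X(\hat{\theta})^{-1} F_Y(\hat{\theta}) \right)
          \left( \theta^{(k)} - \hat{\theta} \right)
% F_X: Fisher information of the hidden-data space,
% F_Y: Fisher information of the observed data.
% A less informative hidden-data space shrinks F_X toward F_Y,
% pushing the rate matrix toward zero and speeding convergence;
% SAGE exploits this freedom separately for each parameter subset.
```

This is the trade-off referred to below: less informative hidden data gives a faster asymptotic rate, typically at the price of a more demanding maximization step.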
The method is implemented by defining a hidden-data space for each subset of parameters and using it to compute the conditional expectation (E-step) and maximization (M-step) for that subset. Because the designer is free to choose these spaces, SAGE is broadly applicable and offers a controllable balance between convergence rate and computational complexity; it is shown to converge faster than EM, especially when less informative hidden-data spaces are used.

The algorithm is applied to two signal processing problems: estimation of superimposed signals in Gaussian noise and image reconstruction from Poisson measurements. In both cases the SAGE algorithms converge faster than the corresponding EM algorithms and accommodate smoothness penalties more readily. The method is also compared with other iterative schemes, such as the coordinate-wise Newton-Raphson method, and is found to be more efficient in terms of convergence rate and computational complexity, making it a flexible and practical tool for penalized-likelihood estimation.
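To make the sequential-update idea concrete for the first application, here is a minimal sketch of a SAGE-style estimator for superimposed complex sinusoids in white Gaussian noise. The hidden data for component k is taken to be that sinusoid plus all of the noise, so each E-step reduces to forming a residual and each M-step to fitting a single sinusoid to it. The function name, grid resolution, and initialization are illustrative choices, not the paper's implementation.

```python
import numpy as np

def sage_sinusoids(y, K, n_iter=30, grid=4096):
    """Sequentially re-fit K complex sinusoids to y (SAGE-style sketch).

    Hidden data for component k: that sinusoid plus ALL of the noise,
    so the E-step is a residual and the M-step a single-sinusoid fit.
    """
    N = len(y)
    n = np.arange(N)
    freqs = np.zeros(K)                  # frequency estimates (cycles/sample)
    amps = np.zeros(K, dtype=complex)    # complex amplitude estimates
    for _ in range(n_iter):
        for k in range(K):
            # E-step: remove the other components' current fits.
            others = sum(amps[l] * np.exp(2j * np.pi * freqs[l] * n)
                         for l in range(K) if l != k)
            r = y - others
            # M-step: matched-filter (FFT) peak gives the frequency,
            # then least squares gives the complex amplitude.
            spec = np.fft.fft(r, grid)
            i = np.argmax(np.abs(spec))
            freqs[k] = i / grid
            amps[k] = spec[i] / N
    return freqs, amps

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
n = np.arange(128)
y = (1.0 * np.exp(2j * np.pi * 0.11 * n)
     + 0.7 * np.exp(2j * np.pi * 0.34 * n)
     + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128)))
print(sage_sinusoids(y, K=2))
```

Because each M-step maximizes the fit of one component while holding the others fixed, every update can only improve the overall least-squares fit, mirroring the monotonicity property described above.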
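For the second application, the sequential flavor of SAGE can be sketched as a pixel-at-a-time variant of the multiplicative Poisson ML-EM update, refreshing the predicted measurements immediately after each pixel so every update sees the latest image. This is an unpenalized illustration under assumed notation (A nonnegative with no all-zero rows or columns, y the Poisson counts); it is a sketch in the spirit of the paper, not its exact algorithm.

```python
import numpy as np

def sage_poisson_ml(A, y, n_iter=20):
    """Pixel-at-a-time ML update for y_i ~ Poisson((A @ lam)_i).

    Classical ML-EM updates every pixel simultaneously; here each
    pixel's multiplicative EM update is applied in turn, and the
    predicted means are refreshed immediately (SAGE-style sketch).
    Assumes A >= 0 with no all-zero rows or columns, so ybar stays > 0.
    """
    n_meas, p = A.shape
    lam = np.ones(p)             # strictly positive starting image
    ybar = A @ lam               # current predicted measurement means
    col_sums = A.sum(axis=0)     # sensitivity of each pixel
    for _ in range(n_iter):
        for j in range(p):
            aj = A[:, j]
            lam_j = lam[j] * (aj @ (y / ybar)) / col_sums[j]
            ybar = ybar + aj * (lam_j - lam[j])  # keep predictions current
            lam[j] = lam_j
    return lam

# Tiny usage example on synthetic data.
rng = np.random.default_rng(1)
A = rng.random((40, 8))
lam_true = rng.random(8) * 5
y = rng.poisson(A @ lam_true)
print(np.round(sage_poisson_ml(A, y), 2), np.round(lam_true, 2))
```

Refreshing ybar after every pixel is what distinguishes this sequential scheme from simultaneous ML-EM: each pixel's update uses the most recent values of all the others, which is the mechanism behind the faster convergence reported in the paper.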