July 2013 | Shutao Li, Member, IEEE, Xudong Kang, Student Member, IEEE, and Jianwen Hu
A fast and effective image fusion method is proposed for creating a highly informative fused image by merging multiple source images. The method first decomposes each source image into two scales: a base layer containing large-scale intensity variations and a detail layer capturing small-scale detail. A novel guided-filtering-based weighted-average technique is then introduced to exploit spatial consistency when fusing the base and detail layers. Experimental results show that the method achieves state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
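To make the decomposition concrete, the following minimal NumPy/SciPy sketch splits an image into base and detail layers using a simple average filter; the function name and window size are illustrative choices, not values prescribed by the source.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def two_scale_decompose(image, size=31):
    """Split an image into a base layer (large-scale intensity
    variations) and a detail layer (small-scale structure).

    A box (average) filter extracts the base layer and the detail
    layer is the residual; the window size here is an illustrative
    default, not a value taken from the paper.
    """
    image = image.astype(np.float64)
    base = uniform_filter(image, size=size)
    detail = image - base
    return base, detail
```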
The method's key contributions are: (1) a fast two-scale fusion framework that does not rely on a specific image decomposition technique, (2) a novel weight-construction method that combines pixel saliency with spatial context, and (3) control over the relative roles of pixel saliency and spatial consistency through the guided filter's parameters. Because the guided filter preserves edges without introducing ringing artifacts, it is well suited to refining the fusion weights, and the resulting method is efficient and robust to imperfect conditions such as misregistration and noise.
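A minimal sketch of these two ingredients, assuming grey-scale inputs scaled to [0, 1]: pixel saliency selects a per-pixel winner, and guided filtering of the resulting binary maps restores spatial consistency. The helper names, the saliency measure (absolute Laplacian smoothed by a Gaussian), and the parameter defaults are illustrative, not an authoritative reimplementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter


def guided_filter(guide, src, radius, eps):
    """Guided filter of He et al.: a plain box-filter NumPy sketch
    for a grey-scale guide, not an optimized implementation."""
    I = guide.astype(np.float64)
    p = src.astype(np.float64)
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)


def fusion_weights(images, radius, eps):
    """Refined weight maps: saliency (absolute Laplacian smoothed
    by a Gaussian) picks a winning source per pixel; guided
    filtering of each binary map, with the source image as guide,
    then spreads the weights in a spatially consistent way."""
    saliency = np.stack([
        gaussian_filter(np.abs(laplace(im.astype(np.float64))), sigma=5)
        for im in images
    ])
    winner = saliency.argmax(axis=0)   # per-pixel most salient source
    weights = np.stack([
        guided_filter(im, (winner == n).astype(np.float64), radius, eps)
        for n, im in enumerate(images)
    ])
    weights = np.clip(weights, 0, None)
    return weights / (weights.sum(axis=0) + 1e-12)  # normalise to sum to 1
```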
The proposed method is compared with seven existing image fusion algorithms: the Laplacian pyramid, the stationary wavelet transform, the curvelet transform, the nonsubsampled contourlet transform, generalized random walks, a wavelet-based statistical sharpness measure, and high-order singular value decomposition. It outperforms them on objective quality metrics such as normalized mutual information, structural similarity, and feature-based measures, while remaining computationally efficient, with time complexity linear in the number of pixels and low memory consumption.
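For reference, normalized mutual information between two images can be sketched from a joint grey-level histogram as below. Published fusion metrics aggregate MI between the fused image and each source and differ in their normalization, so this shows only the core computation, with one common normalization.

```python
import numpy as np


def normalized_mutual_information(a, b, bins=64):
    """Normalised mutual information, 2*MI / (H(a) + H(b)),
    estimated from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))  # marginal entropies
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))
    return 2.0 * mi / (hx + hy)
```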
The method is tested on three image databases: the Petrović database, a multifocus database, and a combined multiexposure and multimodal database. The results show that the method preserves original and complementary information, is robust to imperfect image registration, and produces high-quality fused images without artifacts or distortions. It is also effective for fusing color image sequences and multiexposure sequences. Overall, the proposed method is efficient, robust, and suitable for real-world applications.
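Putting the sketches together, a hypothetical end-to-end usage on two pre-registered grey-scale sources might look as follows. The smooth/sharp parameter pairing reflects the idea of steering spatial consistency through the guided filter's parameters; the specific values are illustrative.

```python
# img1, img2: pre-registered grey-scale arrays scaled to [0, 1].
base1, detail1 = two_scale_decompose(img1)
base2, detail2 = two_scale_decompose(img2)

# Large radius / large eps -> smooth weights for the base layer;
# small radius / small eps -> sharp weights for the detail layer.
w_base = fusion_weights([img1, img2], radius=45, eps=0.3)
w_detail = fusion_weights([img1, img2], radius=7, eps=1e-6)

fused = np.clip(
    w_base[0] * base1 + w_base[1] * base2
    + w_detail[0] * detail1 + w_detail[1] * detail2,
    0.0, 1.0,
)
```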