A General Framework for Multiresolution Image Fusion: from Pixels to Regions

MAY 31, 2002 | G. Piella
This paper presents an overview of image fusion techniques using multiresolution decompositions. The goal is twofold: to provide a common formalism for multiresolution-based fusion, and to develop a new region-based approach that combines aspects of both object- and pixel-level fusion. The paper first introduces a general framework that encompasses most existing multiresolution-based fusion schemes and allows new ones to be constructed. It then extends this framework to enable region-based fusion: the basic idea is to perform a multiresolution segmentation based on all the input images and to use this segmentation to guide the fusion process.

Image fusion integrates multiple images into a composite image that is more suitable for human visual perception and for computer processing tasks such as segmentation, feature extraction, and target recognition. It is particularly useful in applications such as defense surveillance, remote sensing, medical imaging, robotics, and industrial engineering. The paper discusses various fusion techniques, including weighted combination, color space fusion, optimization approaches, and biologically-based methods. It also covers different multiresolution decomposition schemes, such as pyramids and wavelets, and their applications in image fusion, and it concludes with a discussion of performance assessment and future research directions in the field.
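To make the multiresolution fusion idea concrete, here is a minimal sketch of pixel-level fusion with a Laplacian pyramid and a choose-max combination rule. This is only an illustration of the general decompose–combine–reconstruct scheme, not the paper's region-based method; the helper names, the simple 2×2 averaging filter, and the choose-max rule are assumptions made for brevity.

```python
import numpy as np

def downsample(img):
    # Reduce resolution: 2x2 block average followed by decimation.
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Expand back to 'shape' by pixel replication (nearest neighbour).
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Each level stores the detail lost by downsampling; the last entry
    # is the coarsest approximation, so reconstruction is exact.
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low, cur.shape))
        cur = low
    pyr.append(cur)
    return pyr

def fuse(img_a, img_b, levels=3):
    # Decompose both inputs, combine level by level, then reconstruct.
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = []
    for a, b in zip(pa[:-1], pb[:-1]):
        # Choose-max rule: keep the coefficient of larger magnitude,
        # treating magnitude as a crude salience measure.
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))
    fused.append((pa[-1] + pb[-1]) / 2)  # average the approximations
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = upsample(out, detail.shape) + detail
    return out
```

Because the pyramid has perfect reconstruction, fusing an image with itself returns the image unchanged; the region-based extension discussed in the paper would replace the per-pixel choose-max rule with a decision computed per segmented region.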