Automatic Panoramic Image Stitching using Invariant Features


Matthew Brown and David G. Lowe
This paper presents a fully automatic method for panoramic image stitching using invariant features. The approach requires no user input or fixed image ordering, and is robust to changes in rotation, scale, and illumination. Invariant local features (SIFT) are used to find matches between images, enabling automatic recognition of multiple panoramas in unordered datasets. Gain compensation and automatic straightening steps further improve the quality of the final output.

The algorithm first extracts SIFT features from all images and matches them using a k-d tree. Candidate image pairs are then verified by estimating pairwise homographies with RANSAC and applying a probabilistic model to confirm each match. Connected components of the resulting image-match graph correspond to panoramas. Bundle adjustment jointly optimizes the parameters of every camera, ensuring accurate alignment, and the final panorama is rendered with multi-band blending, which produces smooth transitions between images while preserving high-frequency detail.

The method is insensitive to "noise" images that belong to no panorama, handles multiple panoramas within a single unordered dataset, compensates for brightness differences between images, and corrects for radial distortion. Experiments on a large image dataset demonstrate its effectiveness both with and without radial distortion, producing seamless panoramas even under challenging conditions such as large brightness changes and motion blur. The system also copes with different camera motions and scene changes, making it a versatile solution for automatic panoramic image stitching.
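The feature-matching step can be illustrated with a minimal brute-force sketch (the paper matches SIFT descriptors approximately via a k-d tree for speed, and keeps the k nearest neighbours of each feature; the ratio test below is the common acceptance rule from Lowe's SIFT work). `match_features` and the 0.8 threshold are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with a
    nearest-neighbour + ratio test: accept a match only if the
    best distance is clearly smaller than the second best.
    Brute force for clarity; a k-d tree replaces the inner
    loop at scale."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```

The ratio test discards ambiguous matches (two almost equally good candidates in B), which is what makes the later RANSAC stage tractable.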
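Pairwise homography estimation with RANSAC can be sketched in NumPy using the direct linear transform. As in the paper, minimal sets of 4 correspondences are sampled; the function names, iteration count, and 3-pixel inlier threshold here are illustrative assumptions:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a homography H (dst ~ H @ src) from >= 4 point
    pairs via the direct linear transform (SVD null vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """RANSAC: sample 4 correspondences, count reprojection
    inliers, keep the best model, then refit on its inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_H, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        # project all source points and measure reprojection error
        pts = np.c_[src, np.ones(len(src))] @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    if best_inliers.sum() >= 4:
        best_H = dlt_homography(src[best_inliers], dst[best_inliers])
    return best_H, best_inliers
```

Note the paper actually optimizes rotation-plus-focal-length camera models in bundle adjustment; the plain homography above is only the pairwise geometric check.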
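The probabilistic match verification reduces to a linear acceptance criterion comparing the RANSAC inlier count n_i against the total feature count n_f in the overlap region. The constants below (alpha = 8.0, beta = 0.3) follow the values reported in the paper, but treat this as a sketch of the acceptance test rather than the full probabilistic derivation:

```python
# Constants as reported in the paper's verification criterion.
ALPHA, BETA = 8.0, 0.3

def is_image_match(n_inliers: int, n_features: int) -> bool:
    """Accept an image pair as a true match if the inlier count
    exceeds what chance alignment of unrelated images would
    produce: n_i > alpha + beta * n_f."""
    return n_inliers > ALPHA + BETA * n_features
```

This test is what lets the method reject "noise" images that happen to share a few spurious feature matches with a panorama.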
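Grouping verified image matches into panoramas is a plain connected-components computation on the match graph. A union-find sketch (function names illustrative):

```python
def find_panoramas(num_images, verified_pairs):
    """Return the connected components of the graph whose nodes
    are images and whose edges are geometrically verified
    matches; each component is one panorama (or a lone image)."""
    parent = list(range(num_images))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in verified_pairs:
        parent[find(i)] = find(j)  # union the two components

    comps = {}
    for i in range(num_images):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())
```

Singleton components correspond to the unmatched "noise" images, which are simply left out of the rendered output.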
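Gain compensation reduces to a linear least-squares problem over per-image gains. The sketch below makes simplifying assumptions: each overlap is summarized by one mean intensity per image, and the noise/prior standard deviations `sigma_n`, `sigma_g` are free parameters (the paper minimizes a similar quadratic error with a prior keeping gains near 1):

```python
import numpy as np

def gain_compensation(overlaps, n, sigma_n=10.0, sigma_g=0.1):
    """Solve for per-image gains g minimizing, over overlapping
    pairs (i, j), (g_i * mean_i - g_j * mean_j)^2 / sigma_n^2
    plus a prior (g_i - 1)^2 / sigma_g^2.  Setting the gradient
    to zero gives the linear system A g = b solved below.
    overlaps: list of (i, j, mean_i, mean_j), where mean_k is
    image k's mean intensity over the shared region."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, j, mi, mj in overlaps:
        A[i, i] += mi * mi / sigma_n**2
        A[j, j] += mj * mj / sigma_n**2
        A[i, j] -= mi * mj / sigma_n**2
        A[j, i] -= mi * mj / sigma_n**2
    for i in range(n):          # prior pulling each gain toward 1
        A[i, i] += 1.0 / sigma_g**2
        b[i] += 1.0 / sigma_g**2
    return np.linalg.solve(A, b)
```

Without the prior the trivial solution g = 0 would minimize the error; the prior term is what makes the system well posed.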
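Multi-band blending mixes low frequencies over a wide transition region and high frequencies over a narrow one. The paper builds 2-D band-pass pyramids; the 1-D two-band toy below (all names and the box-filter choice are illustrative, not the paper's implementation) shows only the core idea of per-band blending widths:

```python
import numpy as np

def box_blur(x, k):
    """Moving-average low-pass filter with edge padding."""
    pad = np.pad(x, k, mode='edge')
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(pad, kernel, mode='same')[k:-k]

def two_band_blend(a, b, seam, k=8):
    """Two-band blend of 1-D signals a and b along a seam:
    low frequencies are mixed with a smooth ramp, high
    frequencies switch sharply at the seam.  This hides
    exposure differences without ghosting fine detail."""
    n = len(a)
    hard = (np.arange(n) < seam).astype(float)  # 1 left of seam
    soft = box_blur(hard, k)                    # smooth ramp
    low_a, low_b = box_blur(a, k), box_blur(b, k)
    high_a, high_b = a - low_a, b - low_b
    return (soft * low_a + (1 - soft) * low_b
            + hard * high_a + (1 - hard) * high_b)
```

With more bands (a full Laplacian-style pyramid), each octave gets a transition width proportional to its wavelength, which is the scheme the paper renders with.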