Creating Full View Panoramic Image Mosaics and Environment Maps

August 1997 | Richard Szeliski and Heung-Yeung Shum
This paper presents a novel approach to creating full-view panoramic mosaics from image sequences. Unlike existing panoramic stitching methods, which require controlled camera motion, the proposed system allows images to be taken with any motion so long as there is no strong motion parallax; images from a handheld camera can be seamlessly stitched into panoramic mosaics. The system represents a mosaic as a collection of images with associated transformations rather than as a single cylindrical or spherical map, thereby avoiding the singularities such maps exhibit at the poles. The alignment algorithm is fast and robust because it directly recovers 3D camera rotations instead of general 8-parameter perspective transforms. Methods for recovering the camera focal length are also presented, together with an algorithm for efficiently extracting environment maps from mosaics: by mapping the mosaic onto a texture-mapped polyhedron, the virtual environment can be explored using standard 3D graphics viewers.

The paper first reviews cylindrical and spherical panoramas, which require controlled camera motion: they handle only pure panning and need a known focal length. To overcome these limitations, the authors consider general 8-parameter perspective transforms, which allow more flexibility but are slower to estimate and prone to local minima. They then introduce a 3-parameter rotational model that is more robust and efficient: images are aligned with 3D rotations, so the focal length need not be estimated during alignment; it is instead recovered separately from perspective transforms computed between image pairs.
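To make the rotational model concrete, here is a minimal sketch (in Python/NumPy, with helper names of our own invention) of the geometry it relies on: two images taken from the same viewpoint with focal length f are related by the homography H = V R V^-1, where V = diag(f, f, 1) and R is the 3D rotation between the views. The sketch assumes square pixels and coordinates centered on the principal point; the paper's actual alignment iteratively refines R by minimizing intensity error, which is omitted here.

```python
import numpy as np

def rotation_from_omega(omega):
    """Rodrigues' formula: turn a rotation vector (radians) into a 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def homography_from_rotation(R, f):
    """Two views from the same center with focal length f are related by H = V R V^-1."""
    V = np.diag([f, f, 1.0])
    return V @ R @ np.linalg.inv(V)

def warp_points(H, pts):
    """Apply H to an (N, 2) array of pixel coordinates (origin at the principal point)."""
    homog = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

# Example: where do the corners of a 640x480 image land after a 15-degree pan?
f = 500.0
R = rotation_from_omega(np.array([0.0, np.radians(15.0), 0.0]))
H = homography_from_rotation(R, f)
corners = np.array([[-320.0, -240.0], [320.0, -240.0],
                    [320.0, 240.0], [-320.0, 240.0]])
print(warp_points(H, corners))
```

Because each image contributes only three rotation parameters (plus one shared focal length), this parameterization is much better conditioned than fitting a full 8-parameter homography per image pair.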
The paper also addresses accumulated registration errors, which can produce a visible gap or overlap when a sequence closes back on itself. A "gap closing" technique adjusts the focal length estimate and the rotation matrices to distribute and eliminate this error. Finally, the authors present an algorithm for constructing environment maps by projecting the mosaic onto the faces of a 3D model, which allows standard 3D graphics APIs and texture-mapping hardware to render the scene.
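To illustrate the environment-map step, the sketch below resamples a panorama into one face of a cube map by casting a ray through every texel and looking it up in the panorama. It assumes, for simplicity, that the stitched mosaic has already been rendered as an equirectangular (longitude/latitude) image; the paper instead textures the polyhedron's faces directly from the mosaic, so the helper functions here are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def cube_face_rays(size, face_rotation):
    """Unit viewing rays for every texel of one size x size cube-map face.

    face_rotation maps the canonical +z view onto the desired face
    (identity = front face)."""
    u = (np.arange(size) + 0.5) / size * 2.0 - 1.0       # texel centers in [-1, 1]
    x, y = np.meshgrid(u, -u)                            # image y axis points up
    rays = np.stack([x, y, np.ones_like(x)], axis=-1)    # points on the z = 1 plane
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    return rays @ face_rotation.T

def sample_equirect(pano, rays):
    """Nearest-neighbor lookup of each ray in an equirectangular panorama."""
    h, w = pano.shape[:2]
    lon = np.arctan2(rays[..., 0], rays[..., 2])         # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))    # latitude in [-pi/2, pi/2]
    col = ((lon / (2.0 * np.pi) + 0.5) * (w - 1)).astype(int)
    row = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return pano[row, col]

# Example: extract the front (+z) face from a placeholder panorama.
pano = np.zeros((256, 512, 3), dtype=np.uint8)           # stand-in for a real mosaic
front = sample_equirect(pano, cube_face_rays(128, np.eye(3)))
print(front.shape)                                        # (128, 128, 3)
```

Repeating this for six face rotations yields a complete cube map, which is precisely the representation that standard texture-mapping hardware and 3D viewers can display directly.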
The paper concludes with a discussion of future work, including the extraction of 3D world descriptions from full-view panoramic mosaics.