1996 | Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen
This paper introduces the Lumigraph, a novel method for capturing and representing the complete appearance of both synthetic and real-world objects and scenes. Unlike traditional shape-capture and rendering pipelines, the Lumigraph does not rely on an explicit geometric model; instead it samples and reconstructs a 4D function that describes the flow of light at all positions and in all directions in the space surrounding an object. From this function, new images of the object can be generated quickly from any camera position, regardless of the geometric or illumination complexity of the scene.
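The 4D function is parameterized by the two points where each ray crosses a pair of parallel planes, giving coordinates (s, t, u, v); a discretized Lumigraph then stores radiance on a 4D grid and answers arbitrary ray queries by interpolating nearby samples. A minimal sketch of such a lookup (the grid sizes, the random stand-in data, and the `lookup` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical discretized Lumigraph: L[i, j, p, q] stores radiance for the
# ray through grid point (s_i, t_j) on one plane and (u_p, v_q) on the other.
M, N, P, Q = 8, 8, 16, 16
L = np.random.rand(M, N, P, Q)  # stand-in radiance samples in [0, 1]

def lookup(L, s, t, u, v):
    """Quadrilinearly interpolate the 4D light function at (s, t, u, v),
    each coordinate given in continuous grid units."""
    coords = (s, t, u, v)
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    val = 0.0
    for corner in range(16):                            # 2^4 surrounding samples
        idx, w = [], 1.0
        for d in range(4):
            bit = (corner >> d) & 1
            idx.append(min(base[d] + bit, L.shape[d] - 1))  # clamp at the edge
            w *= frac[d] if bit else 1.0 - frac[d]
        val += w * L[tuple(idx)]
    return val
```

At integer coordinates this reduces to a direct table read; between samples, the 16 nearest grid entries are blended.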
The paper describes the system's components: the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images. It details the parameterization and discretization of the 4D Lumigraph, the use of approximate geometric information for depth correction, and practical implementation issues such as camera calibration, pose estimation, and 3D shape approximation. The system can create Lumigraphs for both synthetic and real scenes, with methods for capturing images of real objects from a large number of viewpoints.
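Depth correction uses an approximate depth of the surface along the desired ray to shift the focal-plane coordinate before interpolation, so that rays from nearby grid cameras intersect at the surface rather than at the focal plane. A 1D sketch of the idea, assuming the two planes sit at depths 0 and 1 and `z` is the fractional depth of the surface between them (the function name and plane placement are illustrative assumptions):

```python
def depth_corrected_u(u, s, s_i, z):
    """Shift the focal-plane coordinate u so the ray from grid camera s_i
    passes through the surface point at fractional depth z (measured from
    the camera plane at 0 toward the focal plane at 1)."""
    # Surface point hit by the desired ray (s -> u), at height z:
    #   x = s + z * (u - s)
    # Intersect the line from s_i through (x, z) with the plane at height 1:
    return u + (s - s_i) * (1.0 - z) / z
```

When `s_i == s`, or when `z == 1` (the surface lies on the focal plane), no correction is needed and the formula returns `u` unchanged; the same correction applies independently to the v coordinate.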
The paper also presents a hierarchical algorithm for constructing the Lumigraph from scattered data, addressing the challenges of non-uniform sampling density and sparse data. Additionally, it discusses compression techniques to reduce storage requirements and methods for reconstructing images using texture mapping operations, which significantly reduce the computational cost of rendering.
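The hierarchical construction can be summarized as a "pull-push" procedure: a pull phase averages weighted samples into progressively coarser levels, and a push phase blends the coarse estimates back into under-sampled fine entries. A 1D sketch under simplifying assumptions (box filter, power-of-two length, weights clipped at 1), not the paper's full 4D implementation:

```python
import numpy as np

def pull_push(x, w):
    """Fill sparsely sampled data x (with per-sample confidence weights w)
    by pulling weighted averages to coarser levels, then pushing the coarse
    estimates back wherever the fine-level weights are low."""
    if len(x) == 1:
        return x.astype(float)
    wc = np.minimum(w, 1.0)                          # clip weights at 1
    w2 = wc[0::2] + wc[1::2]                         # pull: coarser weights
    x2 = (wc[0::2] * x[0::2] + wc[1::2] * x[1::2]) / np.maximum(w2, 1e-12)
    x2 = pull_push(x2, w2)                           # recurse to coarser levels
    up = np.repeat(x2, 2)                            # push: nearest upsample
    return wc * x + (1.0 - wc) * up                  # blend where data is sparse
```

Fully confident samples (weight 1) pass through unchanged, while empty regions inherit values pushed down from whatever coarser level first has data, which addresses both non-uniform density and sparsity.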
Finally, the paper provides results demonstrating the effectiveness of the Lumigraph system in generating high-quality images from synthetic and real scenes, highlighting its potential for applications in virtual environments and computer graphics.