August 12-17, 2001 | Buehler, Chris, Michael Bosse, Leonard McMillan, Steven J. Gortler, and Michael Cohen
The paper presents "unstructured lumigraph rendering" (ULR), a new image-based rendering approach that generalizes both lumigraph and view-dependent texture mapping (VDTM) techniques. ULR is designed to meet a set of desirable goals for image-based rendering, including the use of geometric proxies, epipole consistency, resolution sensitivity, unstructured input, equivalent ray consistency, continuity, minimal angular deviation, and real-time performance. The algorithm is capable of handling a wide variety of input configurations, including cameras not restricted to a common plane or manifold. It achieves flexibility by adapting to different numbers of input images and varying levels of geometric accuracy.
The ULR algorithm works by evaluating a "camera blending field" at a set of vertices in the desired image plane and interpolating this field over the whole image. The blending field determines how heavily each source camera is weighted in reconstructing a given pixel: the blending weights combine penalties for angular difference, undersampling, and limited field of view, while visibility constraints ensure that only source cameras that actually observe the relevant surface contribute to the reconstruction.
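As a concrete illustration, the following Python sketch implements a per-ray version of this weighting under the k-nearest-cameras-with-threshold scheme the paper describes. Only the angular penalty is computed; the resolution and field-of-view penalties are noted but omitted, and the function name, the default k, and the input conventions are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def blending_weights(desired_dir, proxy_point, cam_centers, k=4):
    """Sketch of ULR-style blending weights for a single desired ray.

    desired_dir: direction from the desired viewpoint toward the point
        where the ray meets the geometric proxy.
    proxy_point: that 3D proxy intersection point.
    cam_centers: (N, 3) array of source-camera centers, with N > k.
    Returns a length-N weight vector, nonzero for at most k cameras.
    """
    cam_centers = np.asarray(cam_centers, dtype=float)
    desired_dir = desired_dir / np.linalg.norm(desired_dir)

    # Direction from each source camera toward the proxy point.
    src_dirs = proxy_point - cam_centers
    src_dirs /= np.linalg.norm(src_dirs, axis=1, keepdims=True)

    # Angular penalty: angle between the desired ray and each source ray.
    # The paper also folds resolution (undersampling) and field-of-view
    # penalties into this term; they are omitted here for brevity.
    cosines = np.clip(src_dirs @ desired_dir, -1.0, 1.0)
    penalty = np.arccos(cosines)

    # Keep the k lowest-penalty cameras and use the (k+1)-th penalty as
    # an adaptive threshold, so a camera's weight falls smoothly to zero
    # as it drops out of the k-nearest set.
    order = np.argsort(penalty)
    thresh = max(penalty[order[k]], 1e-9)
    sel = order[:k]
    weights = np.zeros(len(cam_centers))
    weights[sel] = np.maximum(1.0 - penalty[sel] / thresh, 0.0)

    # Normalize so the contributing weights sum to one.
    total = weights.sum()
    return weights / total if total > 0 else weights
```

The key design point is the adaptive threshold: because a camera's weight reaches zero exactly when it leaves the k-nearest set, the reconstruction varies continuously as the viewpoint moves, which is one of the stated goals above.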
The algorithm runs in real time by triangulating a set of points in the image plane and interpolating the camera blending field across the resulting triangles. This allows efficient rendering of novel views from an unstructured collection of input images. The algorithm is tested on a variety of datasets, including a pond, a robot, a helicopter, knick-knacks, a car, and a hallway, demonstrating its ability to handle different input configurations and produce high-quality renderings.
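To make the sparse-evaluation idea concrete, here is a hedged Python sketch that samples the blending field only at grid vertices and interpolates it to full resolution. The paper itself triangulates the vertices and lets graphics hardware interpolate and composite the weighted textures; plain bilinear interpolation over a regular grid stands in for that here, and weights_at is a hypothetical callback (e.g. the blending_weights sketch above applied to the ray through each vertex).

```python
import numpy as np

def blend_field_image(width, height, step, weights_at):
    """Evaluate the blending field on a sparse vertex grid, then
    interpolate it to every pixel.

    weights_at(x, y) is assumed to return a length-n_cams weight
    vector for the ray through pixel (x, y). width and height are
    assumed divisible by step. Returns (height, width, n_cams).
    """
    xs = np.arange(0, width + 1, step)
    ys = np.arange(0, height + 1, step)
    # Sparse evaluation: the expensive weight computation runs only
    # at grid vertices, not at every pixel.
    field = np.array([[weights_at(x, y) for x in xs] for y in ys])

    # Bilinear interpolation of the vertex weights across each cell
    # (a stand-in for the hardware interpolation over triangles).
    gx = np.arange(width) / step
    gy = np.arange(height) / step
    x0, fx = np.floor(gx).astype(int), gx - np.floor(gx)
    y0, fy = np.floor(gy).astype(int), gy - np.floor(gy)

    top = (field[np.ix_(y0, x0)] * (1 - fx)[None, :, None]
           + field[np.ix_(y0, x0 + 1)] * fx[None, :, None])
    bot = (field[np.ix_(y0 + 1, x0)] * (1 - fx)[None, :, None]
           + field[np.ix_(y0 + 1, x0 + 1)] * fx[None, :, None])
    return top * (1 - fy)[:, None, None] + bot * fy[:, None, None]
```

Evaluating the weights at, say, every 16th pixel and interpolating in between is what keeps the cost low enough for real-time rendering while the blending field still varies smoothly across the image.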
The results show that ULR can produce convincing new images from unstructured input collections, and it is efficient enough to run in real time. The algorithm is a generalization of both lumigraph and VDTM rendering techniques, allowing for unstructured sets of cameras and variable information about scene geometry. It offers the benefits of real-time structured lumigraph rendering, including speed and photorealistic quality, while also supporting geometric proxies, unstructured input cameras, and variations in resolution and field of view.