July 19-24, 1998 | Ramesh Raskar, Greg Welch, Matt Cutts, Adam Lake, Lev Stesin, and Henry Fuchs
The paper introduces a vision for the future of office spaces, integrating computer vision and computer graphics into a unified system that combines ideas from the CAVE™, tiled display systems, and image-based modeling. The core idea is to use real-time computer vision to extract per-pixel depth and reflectance information from visible surfaces in the office, including walls, furniture, objects, and people. This information is then used to project high-resolution graphics and text onto these surfaces, to transmit dynamic image-based models over a network for remote display, or to interpret changes in the surfaces for tracking, interaction, or augmented reality applications.
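The depth-extraction step amounts to triangulating correspondences between a calibrated camera and projector, with the projector treated as an inverse camera. The following is a minimal sketch of one standard way to do this, not the paper's code; the function name `triangulate` and the linear (DLT) formulation are assumptions for illustration:

```python
import numpy as np

def triangulate(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation of one surface point.

    P_cam, P_proj : 3x4 projection matrices of the calibrated camera
                    and projector (the projector acts as an inverse
                    camera).
    x_cam, x_proj : corresponding 2D pixel coordinates of the same
                    surface point in the camera image and the
                    projected pattern.
    Returns the 3D point in world coordinates.
    """
    # Each correspondence contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - P[0] @ X = 0, etc.
    A = np.array([
        x_cam[0]  * P_cam[2]  - P_cam[0],
        x_cam[1]  * P_cam[2]  - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied at every camera pixel whose projector correspondence has been decoded from the structured-light patterns, this yields the per-pixel depth map the system builds on.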
The authors propose using ceiling-mounted cameras and "smart" projectors to capture dynamic image-based models with structured light techniques, and to display high-resolution images on designated display surfaces. They aim to achieve simultaneous capture and display by automatically calibrating for geometric, intensity, and resolution variations caused by irregular or changing display surfaces.
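One way to let capture and display coexist, in the spirit of the imperceptible structured light the authors describe, is to split each displayed frame into a pattern field and its complement, so that the two fields visually integrate back to the intended image while a camera exposed during a single field sees the code. A minimal sketch, assuming an 8-bit image and a binary pattern; the function name, the amplitude parameter `delta`, and the clamping strategy are illustrative, not the paper's implementation:

```python
import numpy as np

def embed_pattern(image, pattern, delta=4.0):
    """Split one display frame into two fields that average back to
    the visible image while hiding a binary code in their difference.

    image   : HxW float array in [0, 255], the frame the user should see.
    pattern : HxW boolean array, the structured-light code to embed.
    delta   : modulation amplitude; small enough to be imperceptible,
              large enough for the camera to detect.
    """
    offset = np.where(pattern, delta, -delta)
    # Clamp toward mid-range so both fields stay displayable; the
    # mean of the two fields then equals the clamped image.
    image = np.clip(image, delta, 255.0 - delta)
    field_a = image + offset   # camera exposes during this field
    field_b = image - offset   # complement restores the average
    return field_a, field_b
```

Shown at twice the nominal frame rate, `field_a` and `field_b` integrate in the eye to the original image, while the camera, synchronized to `field_a`, recovers the pattern from the signed offsets.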
The paper discusses the challenges and current progress in dynamic image-based modeling, rendering, and spatially immersive displays. It presents a two-pass projective texture scheme to generate images that appear correct to a moving head-tracked observer. The authors also describe their current implementation, which includes a working system with projective textures, depth extraction, imperceptible structured light, and initial experiments in intensity blending.
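The two-pass scheme works roughly as follows: pass one renders the desired image from the head-tracked user's viewpoint; pass two uses that image as a projective texture over the captured display-surface geometry, rendered from the projector's calibrated viewpoint, so the physically projected light appears correct to the user. Below is a minimal numpy sketch of the texture-coordinate mapping at the core of pass two, assuming row-vector points and OpenGL-style clip coordinates; the function name is illustrative:

```python
import numpy as np

def projective_tex_coords(surface_pts, user_view_proj):
    """Map display-surface vertices to coordinates in the pass-1
    image rendered from the head-tracked user's viewpoint.

    surface_pts    : Nx3 array of display-surface vertices (from the
                     captured depth model).
    user_view_proj : 4x4 combined view-projection matrix of the user.
    Returns Nx2 texture coordinates in [0, 1].
    """
    n = surface_pts.shape[0]
    homog = np.hstack([surface_pts, np.ones((n, 1))])
    clip = homog @ user_view_proj.T      # project into the user's view
    ndc = clip[:, :2] / clip[:, 3:4]     # perspective divide
    return 0.5 * (ndc + 1.0)             # NDC [-1, 1] -> tex [0, 1]
```

Rendering the textured surface geometry from the projector's viewpoint then compensates for the irregular shape of the real surface, which is why the projected result looks undistorted from the tracked head position.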
The paper concludes with a discussion of future work, including integrating scene acquisition and display imperceptibly, improving speed and parallelization, and addressing latency issues. The authors aim to make the system more scalable and user-friendly while maintaining the ability to project onto arbitrary surfaces and handle dynamic environments.