Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Andrew Fitzgibbon
KinectFusion is a real-time system for dense surface mapping and tracking using a low-cost Kinect depth camera and commodity graphics hardware. Depth data from the sensor is fused into a global implicit surface model, represented as a truncated signed distance function (TSDF), which enables efficient and accurate surface reconstruction. The sensor's pose is tracked with a coarse-to-fine iterative closest point (ICP) algorithm that uses all available depth data rather than sparse features. Because the Kinect senses depth actively, the system operates even in complete darkness, which broadens its usefulness for computer vision applications. A GPU-based implementation runs both tracking and mapping at the sensor's frame rate, producing detailed, metrically consistent 3D reconstructions of complex indoor scenes and supporting augmented reality (AR) applications that require real-time interaction between virtual content and a dense reconstruction of the real scene. Qualitative and quantitative experiments demonstrate robust tracking under rapid motion, handling of dynamic object motion, and reconstruction of large-scale scenes.
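The volumetric fusion described above can be sketched compactly. KinectFusion maintains, per voxel, a truncated signed distance value and a cumulative weight, and folds each new depth frame in as a weighted running average. The function below is a minimal illustrative sketch of that update, not the paper's GPU implementation; the constant per-frame weight and the parameter names (`trunc`, `max_weight`) are assumptions for illustration.

```python
import numpy as np

def fuse_tsdf(tsdf, weights, new_dist, trunc=0.1, max_weight=64.0):
    """Fuse one frame's signed distances into the global TSDF as a
    per-voxel weighted running average (KinectFusion-style update)."""
    # Truncate distances to [-trunc, trunc] and normalise to [-1, 1].
    d = np.clip(new_dist, -trunc, trunc) / trunc
    w = 1.0  # simple constant per-frame weight (a common simplification)
    # Weighted running average of old and new distance values.
    tsdf = (weights * tsdf + w * d) / (weights + w)
    # Cap the weight so old frames are eventually forgotten,
    # which helps when objects in the scene move.
    weights = np.minimum(weights + w, max_weight)
    return tsdf, weights
```

Capping the weight bounds each voxel's memory of past frames, which is one reason the system can absorb some dynamic object motion rather than locking in the first observation forever.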
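The dense ICP tracking can likewise be illustrated. KinectFusion aligns each incoming depth frame against the model surface by minimising a point-to-plane error over all valid depth pixels; under a small-angle linearisation this reduces to a 6x6 linear system per iteration. The sketch below shows one such linearised step under stated assumptions (given correspondences and normals; the real system finds correspondences by projective data association and iterates coarse-to-fine):

```python
import numpy as np

def icp_point_to_plane_step(src, dst, normals):
    """One linearised point-to-plane ICP step (small-angle assumption).

    Solves for a twist x = (rx, ry, rz, tx, ty, tz) minimising
    sum_i ((p_i + r x p_i + t - q_i) . n_i)^2 over corresponding
    source points p_i, destination points q_i, and normals n_i.
    """
    # Each residual is linear in the twist: (p x n) . r + n . t = (q - p) . n
    A = np.hstack([np.cross(src, normals), normals])  # N x 6 Jacobian
    b = np.einsum('ij,ij->i', dst - src, normals)     # N residuals along normals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # approximate incremental rotation and translation
```

In practice this step is repeated, re-associating correspondences after each pose update, and run on the GPU over every depth pixel, which is what makes dense whole-frame tracking feasible at sensor frame rate.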