Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation

1994 | Philippe Lacroute, Marc Levoy
This paper presents a new algorithm for fast volume rendering based on a shear-warp factorization of the viewing transformation. It extends existing methods with three contributions: an object-order rendering algorithm for classified volumes that is significantly faster than prior methods with minimal loss of image quality, a shear-warp factorization that handles perspective viewing, and a data structure for encoding spatial coherence in unclassified volumes. The algorithm is parallelizable and also supports rendering mixtures of volume data and geometry.

The shear-warp factorization decomposes the viewing transformation into three components: a 3D shear parallel to the volume slices, a projection that forms a distorted intermediate image, and a 2D warp that produces the final image. Because the sheared volume slices and the intermediate image have aligned scanlines, the renderer can traverse run-length-encoded representations of both in synchrony, skipping transparent voxel runs and opaque image pixels without complex resampling or addressing arithmetic. For parallel projections each slice is simply translated and resampled; for perspective projections each slice is additionally scaled. Resampling uses a bilinear interpolation filter with a gather-type convolution, and a lookup table handles shading and opacity correction. With these optimizations the algorithm renders a 256³-voxel medical data set in one second, at least five times faster than previous algorithms.

For unclassified volumes, a fast classification algorithm avoids a separate preprocessing pass by evaluating the opacity transfer function during rendering; a min-max octree combined with a summed-area table of the transfer function determines which portions of a scanline are non-transparent.
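The core of the factorization for parallel projections can be sketched as follows. This is an illustrative example, not the paper's code: with the z axis chosen as the principal viewing axis, shearing slice z by (s_x · z, s_y · z) makes every viewing ray perpendicular to the slices, so projection into the intermediate image reduces to an axis-aligned sum. The function and variable names are assumptions for this sketch.

```python
def shear_coefficients(view_dir):
    """Per-slice shear for a parallel projection along view_dir (z is
    assumed to be the principal viewing axis, so vz must be nonzero)."""
    vx, vy, vz = view_dir
    assert vz != 0, "z must be the principal viewing axis"
    return -vx / vz, -vy / vz

def shear_point(p, s_x, s_y):
    """Apply the volume shear to a point (x, y, z) in object space."""
    x, y, z = p
    return (x + s_x * z, y + s_y * z, z)

# Two points on the same viewing ray with direction (0.3, -0.2, 1.0):
view = (0.3, -0.2, 1.0)
sx, sy = shear_coefficients(view)
p0 = (1.0, 2.0, 0.0)
p1 = (1.0 + 0.3 * 5, 2.0 - 0.2 * 5, 5.0)  # p0 + 5 * view

q0 = shear_point(p0, sx, sy)
q1 = shear_point(p1, sx, sy)
# After the shear, both points share the same (x, y): the ray is now
# perpendicular to the slices.
assert abs(q0[0] - q1[0]) < 1e-9 and abs(q0[1] - q1[1]) < 1e-9
```

The 2D warp that follows projection then undoes the distortion introduced by the shear to produce the final image; since it is a purely 2D operation on the intermediate image, it is much cheaper than resampling the 3D volume.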
This classification algorithm renders unclassified volumes in three seconds. Performance is evaluated on an SGI Indigo workstation and shown to be competitive with algorithms designed for massively parallel processors. The method is flexible: it supports a wide range of shading models, handles both parallel and perspective projections, and can be parallelized on MIMD shared-memory multiprocessors. The results show that the algorithm produces high-quality images with minimal computational overhead.
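The two coherence optimizations described above, skipping run-length-encoded transparent voxels and terminating once a pixel is opaque, can be illustrated with a simplified single-ray sketch. This is not the paper's implementation (which streams whole voxel and image scanlines in lockstep); the threshold value and names are assumptions.

```python
OPAQUE = 0.95  # assumed early-termination threshold

def composite_scanline(runs):
    """Front-to-back 'over' compositing along one ray.

    runs: list of (length, samples) pairs, where samples is None for a
    transparent run (skipped in one step) or a list of (color, alpha)
    voxel samples for a non-transparent run."""
    color, alpha = 0.0, 0.0
    for length, samples in runs:
        if samples is None:
            continue                          # skip a transparent run in O(1)
        for c, a in samples:
            color += (1.0 - alpha) * a * c    # front-to-back "over" operator
            alpha += (1.0 - alpha) * a
            if alpha >= OPAQUE:
                return color, alpha           # early ray termination
    return color, alpha

c, a = composite_scanline([
    (4, None),                           # transparent run, skipped entirely
    (2, [(1.0, 0.6), (0.5, 0.8)]),       # two non-transparent voxels
    (3, None),
])
```

Because both the volume and the intermediate image are run-length encoded, the renderer touches only voxels that are neither transparent nor hidden behind already-opaque pixels, which is where most of the speedup over ray casting comes from.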
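The transparency test behind the fast classification algorithm can be sketched as follows. This simplified example (names assumed, and using a 1D prefix sum for a single-parameter transfer function, whereas the paper's summed-area table generalizes to multi-parameter transfer functions) shows how a min-max octree node's density range [lo, hi] can be tested for transparency in constant time.

```python
def build_opacity_prefix(transfer):
    """Prefix sums of the opacity transfer function: transfer[d] is the
    opacity assigned to density value d."""
    prefix = [0.0]
    for alpha in transfer:
        prefix.append(prefix[-1] + alpha)
    return prefix

def node_is_transparent(prefix, lo, hi):
    """True iff every density value in [lo, hi] maps to zero opacity,
    answered in O(1) from the prefix-sum table."""
    return prefix[hi + 1] - prefix[lo] == 0.0

# Assumed transfer function: opaque only for densities 100..149.
transfer = [0.0] * 256
for d in range(100, 150):
    transfer[d] = 0.8

prefix = build_opacity_prefix(transfer)
assert node_is_transparent(prefix, 0, 99)        # node is fully transparent
assert not node_is_transparent(prefix, 90, 110)  # overlaps the opaque range
```

When the user edits the transfer function, only this small table must be rebuilt, which is why the method can reclassify and re-render without the preprocessing pass that the classified-volume pipeline requires.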