A Morphable Model For The Synthesis Of 3D Faces

Volker Blanz, Thomas Vetter
This paper introduces a new technique for modeling textured 3D faces. The method allows for the automatic generation of 3D faces from one or more photographs or direct modeling through an intuitive user interface. It addresses two key challenges in computer-aided face modeling: automatic registration of new face images or models to an internal face model, and ensuring the naturalness of modeled faces by avoiding unlikely appearances. Starting from a set of 3D face models, a morphable face model is derived by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of the example faces are used to guide manual modeling or automated matching algorithms. The paper demonstrates 3D face reconstructions from single images and their applications for photo-realistic image manipulations.

It also shows face manipulations according to complex parameters such as gender, fullness of a face, or its distinctiveness. The morphable face model is a multidimensional 3D morphing function based on the linear combination of a large number of 3D face scans. By computing the average face and the main modes of variation in the dataset, a probability distribution is imposed on the morphing function to avoid unlikely faces. Parametric descriptions of face attributes such as gender, distinctiveness, "hooked" noses, or the weight of a person are derived by evaluating the distribution of exemplar faces for each attribute within the face space.

The paper also describes an algorithm for matching the morphable model to novel images or 3D scans of faces, computing correspondence based on the morphable model. An iterative method is introduced for building a morphable model automatically from a raw dataset of 3D face scans when no correspondences between the exemplar faces are available.
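The attribute parametrization described above can be sketched in a few lines. The idea, following the summary, is to estimate a direction in face space for a labeled attribute (e.g. gender or weight) as the label-weighted sum of each exemplar's deviation from the average face; moving a face vector along that direction then changes the attribute. This is an illustrative NumPy sketch with hypothetical toy data, not the paper's exact estimator:

```python
import numpy as np

def attribute_direction(shapes, labels):
    """Estimate a face-space direction for a labeled attribute as the
    label-weighted sum of deviations from the average face.

    shapes: (m, n) array, one flattened shape (or texture) vector per face
    labels: (m,) array of attribute ratings (e.g. +1 male, -1 female)
    """
    mean = shapes.mean(axis=0)
    # Weighted sum of deviations; the overall scale is arbitrary.
    return (labels[:, None] * (shapes - mean)).sum(axis=0)

# Toy example: 4 hypothetical "faces" of 6 coordinates each.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(4, 6))
labels = np.array([1.0, 1.0, -1.0, -1.0])
delta = attribute_direction(shapes, labels)

# Adding a multiple of `delta` to any face vector shifts it along the
# attribute axis while leaving the rest of its variation untouched.
exaggerated = shapes[0] + 0.5 * delta
```

The same routine applies unchanged to texture vectors, since shape and texture share the vector-space representation.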
The database consists of 200 laser scans of young adults (100 male and 100 female). The scans provide head structure data in a cylindrical representation, with radii r(h,φ) of surface points sampled at 512 equally-spaced angles φ and 512 equally-spaced vertical steps h. Additionally, RGB color values R(h,φ), G(h,φ), and B(h,φ) were recorded at the same spatial resolution and stored in a texture map with 8 bits per channel. The morphable model is based on this dataset of 3D faces. Morphing between faces requires full correspondence between all of the faces. The model is defined by shape and texture vectors that contain the X, Y, Z coordinates of its vertices and the R, G, B color values of the corresponding vertices. The model is constructed using a dataset of m exemplar faces, each represented by its shape vector S_i and texture vector T_i. New shapes and textures can be expressed as linear combinations of the shapes and textures of the m exemplar faces.
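The linear-combination model above can be sketched directly: a new face is a coefficient-weighted sum of the exemplar shape and texture vectors, with each coefficient vector normalized to sum to 1 so the result stays near the span of the examples. A minimal NumPy sketch, with hypothetical data shapes:

```python
import numpy as np

def morph(shape_vectors, texture_vectors, a, b):
    """Form a new face as normalized linear combinations of exemplars.

    shape_vectors:   (m, 3n) array of flattened (X, Y, Z) coordinates
    texture_vectors: (m, 3n) array of flattened (R, G, B) values
    a, b: length-m coefficient sequences; each is rescaled to sum to 1.
    Returns (new_shape, new_texture), each of length 3n.
    """
    a = np.array(a, dtype=float)
    b = np.array(b, dtype=float)
    a /= a.sum()
    b /= b.sum()
    # Matrix-vector products: weighted sums over the m exemplars.
    return a @ shape_vectors, b @ texture_vectors
```

Note that shape and texture take independent coefficients, so the geometry of one face can be combined with the coloring of another; the statistical constraints mentioned above would additionally penalize coefficient choices far from the exemplar distribution.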
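To build the vertex coordinates that enter the shape vectors, the cylindrical scan r(h,φ) described earlier must be converted to Cartesian positions. A sketch of that conversion, assuming column j maps to angle φ_j = 2πj/W and row i to height i·h_step (the units and axis convention here are illustrative, not the paper's):

```python
import numpy as np

def cylindrical_to_cartesian(r, h_step=1.0):
    """Convert a cylindrical range scan r(h, phi) to Cartesian vertices.

    r: (H, W) array of radii, e.g. 512x512 as in the scan database.
    Returns an (H, W, 3) array of (x, y, z) positions, one per sample.
    """
    H, W = r.shape
    phi = 2.0 * np.pi * np.arange(W) / W
    x = r * np.cos(phi)[None, :]          # radial component
    z = r * np.sin(phi)[None, :]
    y = (np.arange(H) * h_step)[:, None] * np.ones_like(r)  # height axis
    return np.stack([x, y, z], axis=-1)
```

Flattening the (H, W, 3) output row-by-row yields the (X, Y, Z, X, Y, Z, …) shape vector; the texture map R(h,φ), G(h,φ), B(h,φ) flattens to the matching texture vector.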