A Muscle Model for Animating Three-Dimensional Facial Expression

1987 | Keith Waters
The paper presents a parameterized facial muscle model designed to create realistic facial animations. The model aims to address the limitations of existing methods, which often hard-wire specific actions and are not adaptable to different facial topologies. By using a limited number of parameters, the model allows for a richer vocabulary of facial expressions and a more general approach to modeling primary facial movements.

The author discusses the structure of the face and proposes a simple method for modeling muscle processes suitable for various facial types. The research is motivated by the need to accurately represent diverse facial expressions, particularly for communication with the deaf and hard-of-hearing, as well as in computer-generated speech synthesis. The model avoids direct hard-wiring of performable actions and instead focuses on determining the motion bounds of key facial nodes.

The paper also explores the physical properties of facial muscles and skin, and presents a computer model that simulates muscle actions using non-linear interpolants. The model is implemented as a parameter-driven program that can generate polygonal or vector descriptions for rendering. The author concludes by discussing future developments and acknowledges the support of several individuals and institutions.
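The summary describes the mechanism only in outline, so the following is a minimal sketch of how a parameter-driven vector-muscle displacement with a non-linear (cosine) falloff might look in code. It is written in Python/NumPy for readability; the function name `apply_linear_muscle`, the parameters `influence_angle`, `fall_start`, `fall_end`, and `gain`, and the exact falloff expressions are illustrative assumptions, not the paper's own notation.

```python
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction,
                        influence_angle=0.6, fall_start=0.3, fall_end=1.0,
                        gain=0.5):
    """Pull skin-mesh vertices toward the muscle's fixed attachment point.

    vertices   : (N, 3) array of skin-node positions
    head, tail : 3-vectors; the muscle runs from its fixed head (bone
                 attachment) to its tail (skin insertion)
    contraction: scalar in [0, 1], the muscle-activation parameter
    """
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    axis = tail - head
    length = np.linalg.norm(axis)
    axis /= length

    out = vertices.astype(float).copy()
    for i, p in enumerate(vertices):
        to_p = p - head
        dist = np.linalg.norm(to_p)
        if dist < 1e-9:
            continue
        # Angular falloff: nodes far off the muscle axis are affected less,
        # and nodes outside the cone of influence not at all.
        angle = np.arccos(np.clip(np.dot(to_p / dist, axis), -1.0, 1.0))
        if angle > influence_angle:
            continue
        angular = np.cos(angle / influence_angle * np.pi / 2.0)

        # Radial falloff: full effect near the head, blending smoothly
        # (via a cosine interpolant) to zero at the zone's outer boundary.
        r = dist / length
        if r >= fall_end:
            continue
        radial = 1.0 if r <= fall_start else np.cos(
            (r - fall_start) / (fall_end - fall_start) * np.pi / 2.0)

        # Displace the node toward the muscle head by a non-linearly
        # weighted fraction of its offset.
        out[i] = p + gain * contraction * angular * radial * (head - p)
    return out
```

With `contraction=0.0` the mesh is unchanged, while increasing the parameter pulls nodes inside the cone of influence toward the muscle head; this parameter-per-muscle control is the kind of behaviour the summary attributes to the model.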