February 2001 | Ying-li Tian [Member, IEEE], Takeo Kanade [Fellow, IEEE], and Jeffrey F. Cohn [Member, IEEE]
This paper presents an Automatic Face Analysis (AFA) system that analyzes facial expressions by classifying fine-grained changes into the action units (AUs) of the Facial Action Coding System (FACS), rather than into a few prototypic expressions. The system uses multistate face and facial component models to track and model facial features, including lips, eyes, brows, cheeks, and furrows, covering both permanent and transient features. During tracking, detailed parametric descriptions of these features are extracted and used to recognize AUs, whether they occur alone or in combination. AU recognition is performed by two neural networks, one for the upper face and one for the lower face, which together recognize 16 of the 30 AUs that have a specific anatomical basis and occur frequently in emotion and paralinguistic communication. The system achieves average recognition rates of 96.4% for upper face AUs and 96.7% for lower face AUs, for AUs occurring both singly and in combination. Evaluated on multiple databases, including tests with novel faces and with independent databases for training and testing, the system generalizes well and outperforms previously reported AU recognition systems.
The system has been found to be effective in recognizing facial expressions and has potential applications in areas such as human identification and multimodal user interfaces.
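The two-network recognition scheme described above, in which parametric descriptions of tracked facial features are fed to separate upper- and lower-face classifiers with one output per AU, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the feature dimensions, hidden-layer sizes, and AU subsets shown are illustrative choices, not the paper's actual configuration, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AUNet:
    """Single-hidden-layer network with one sigmoid output per AU.
    Independent per-AU outputs let AUs be detected alone or in
    combination (multi-label), as in the AFA system's design."""

    def __init__(self, n_features, n_hidden, au_labels):
        self.au_labels = au_labels
        # Random weights stand in for trained ones (illustrative only).
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, len(au_labels)))
        self.b2 = np.zeros(len(au_labels))

    def predict(self, params, threshold=0.5):
        """Map a feature-parameter vector to per-AU scores in [0, 1]
        and a boolean detection mask at the given threshold."""
        h = np.tanh(params @ self.W1 + self.b1)
        p = sigmoid(h @ self.W2 + self.b2)
        scores = {au: float(pi) for au, pi in zip(self.au_labels, p)}
        return scores, p >= threshold

# Upper-face network: fed brow/eye/cheek/furrow parameters (assumed dims).
upper_net = AUNet(n_features=15, n_hidden=6,
                  au_labels=["AU1", "AU2", "AU4", "AU5", "AU6", "AU7"])
# Lower-face network: fed lip and lower-face furrow parameters (assumed dims).
lower_net = AUNet(n_features=9, n_hidden=6,
                  au_labels=["AU9", "AU12", "AU15", "AU17", "AU20", "AU25"])

# Classify one (random, stand-in) upper-face feature vector.
scores, detected = upper_net.predict(rng.normal(size=15))
```

Splitting the classifier into two smaller networks mirrors the anatomy: upper- and lower-face AUs are driven by largely disjoint feature sets, so each network sees only the parameters relevant to its region.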