LivePortrait is an efficient framework for portrait animation that produces realistic, expressive animations of static portraits with precise control over facial expressions and movements. The framework builds on implicit-keypoint-based methods, which balance computational efficiency and controllability. By scaling the training data to 69 million high-quality frames and adopting a mixed image-video training strategy, the model achieves strong generalization. To further enhance controllability, it introduces stitching and retargeting modules implemented as small MLPs with negligible computational overhead, enabling precise control over eye and lip movements. The model generates each animated frame in 12.8 ms on an RTX 4090 GPU with PyTorch. Experimental results show that LivePortrait outperforms diffusion-based methods in both generation quality and efficiency. The framework handles large poses and multiple faces, and it extends to audio-driven and animal portrait animation. Despite these strengths, the model struggles in cross-reenactment scenarios with large pose variations and may produce jitter with certain driving videos. Ethical considerations regarding the potential misuse of portrait animation technology are also discussed.
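To make the stitching/retargeting idea concrete, here is a minimal PyTorch sketch of such a module: a small MLP that predicts a per-keypoint offset from the source and driving implicit keypoints. The class name `StitchingMLP`, the keypoint count, and the layer sizes are illustrative assumptions, not LivePortrait's actual implementation.

```python
# Hypothetical sketch of a stitching-style module: a small MLP that maps
# concatenated source/driving implicit keypoints to a per-keypoint offset.
# Keypoint count (21, 3D) and hidden width are assumptions for illustration.
import torch
import torch.nn as nn


class StitchingMLP(nn.Module):
    def __init__(self, num_kp: int = 21, hidden: int = 128):
        super().__init__()
        in_dim = num_kp * 3 * 2  # flattened source + driving keypoints
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_kp * 3),  # per-keypoint (x, y, z) offset
        )

    def forward(self, kp_source: torch.Tensor, kp_driving: torch.Tensor) -> torch.Tensor:
        # kp_source, kp_driving: (batch, num_kp, 3) implicit keypoints
        b = kp_source.shape[0]
        x = torch.cat([kp_source.reshape(b, -1), kp_driving.reshape(b, -1)], dim=1)
        delta = self.net(x).reshape(b, -1, 3)
        # Adjust the driving keypoints before warping; the heavy generator is
        # untouched, which is why the overhead of such a module is negligible.
        return kp_driving + delta


if __name__ == "__main__":
    mlp = StitchingMLP()
    kp_s = torch.randn(1, 21, 3)
    kp_d = torch.randn(1, 21, 3)
    print(mlp(kp_s, kp_d).shape)  # torch.Size([1, 21, 3])
```

Because the module only edits a few dozen keypoint coordinates rather than pixels, it adds microseconds of compute per frame, which is consistent with the paper's claim of minimal overhead.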