3D Diffusion Policy (DP3) is a novel visual imitation learning approach that integrates 3D visual representations with diffusion policies, a class of conditional action generative models. The core of DP3 is the use of a compact 3D visual representation extracted from sparse point clouds using an efficient point encoder. In experiments involving 72 simulation tasks, DP3 achieved a 24.2% relative improvement over baselines with just 10 demonstrations. In real-world robot tasks, DP3 demonstrated precise control with an 85% success rate using only 40 demonstrations per task, showing excellent generalization across space, viewpoint, appearance, and instance. Notably, DP3 rarely violated safety requirements in real-world experiments, unlike baseline methods that often required human intervention. The code and videos are available at [3d-diffusion-policy.github.io](https://3d-diffusion-policy.github.io).
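To make the "compact 3D visual representation" concrete, the sketch below shows one plausible shape of such an encoder: a small per-point MLP followed by an order-invariant max-pool that maps a sparse point cloud to a single feature vector, which would then condition the diffusion policy. This is a minimal illustration with made-up layer sizes and random weights, not DP3's actual encoder.

```python
import numpy as np

def encode_point_cloud(points: np.ndarray, feat_dim: int = 64) -> np.ndarray:
    """Map a sparse point cloud of shape (N, 3) to a compact (feat_dim,) vector.

    Hypothetical sketch: a two-layer per-point MLP with ReLU, then a max-pool
    over points so the output is invariant to point ordering and count.
    Weights are random here purely for illustration.
    """
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((3, feat_dim)) * 0.1        # per-point layer 1
    w2 = rng.standard_normal((feat_dim, feat_dim)) * 0.1  # per-point layer 2
    h = np.maximum(points @ w1, 0.0)   # (N, feat_dim) ReLU features
    h = np.maximum(h @ w2, 0.0)        # (N, feat_dim)
    return h.max(axis=0)               # global max-pool -> (feat_dim,)

# A sparse cloud of 512 points yields one compact conditioning vector.
cloud = np.random.default_rng(1).standard_normal((512, 3))
feature = encode_point_cloud(cloud)
print(feature.shape)
```

The max-pool is what keeps the representation compact and permutation-invariant: however many points the sensor returns, the policy is conditioned on a single fixed-size vector.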