GaussianGrasper is a novel approach to open-world robotic grasping guided by natural-language instructions. The method uses 3D Gaussian Splatting to represent the scene explicitly as a collection of Gaussian primitives, enabling efficient and accurate reconstruction from RGB-D inputs. To address the challenges of language-guided manipulation, it reconstructs a 3D feature field through efficient feature distillation, allowing the robot to understand language queries and localize the referenced objects. Feasible grasp poses are then generated by a normal-guided grasp module that filters out infeasible candidates using surface normals and force-closure theory. After each manipulation, the scene is updated by editing the affected Gaussian primitives and fine-tuning the 3D Gaussian field with only a small number of new views. In real-world experiments, the system achieves accurate language-conditioned object localization and grasping in cluttered environments, outperforming existing approaches in accuracy, speed, and scene-updating capability. Extensive experiments on real-world scenes validate GaussianGrasper as an efficient, scalable solution for open-world, language-guided manipulation in dynamic scenes.
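To make the feature-field query concrete, the sketch below shows one way such language-guided localization could work: each Gaussian primitive carries a distilled feature vector, and a text embedding (for example, from a CLIP-style encoder) is compared against those features by cosine similarity to select the queried object's primitives. All tensor names, shapes, and the threshold here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def localize_by_text(gaussian_features: torch.Tensor,
                     gaussian_centers: torch.Tensor,
                     text_embedding: torch.Tensor,
                     threshold: float = 0.25):
    """Select Gaussian primitives whose distilled features match a language query.

    gaussian_features: (N, D) per-primitive features distilled from a 2D foundation model.
    gaussian_centers:  (N, 3) 3D centers of the Gaussian primitives.
    text_embedding:    (D,)   embedding of the language query (e.g., a CLIP text feature).
    Returns the centers of the matching primitives and their similarity scores.
    """
    feats = F.normalize(gaussian_features, dim=-1)
    query = F.normalize(text_embedding, dim=-1)
    sim = feats @ query                      # (N,) cosine similarity to the query
    mask = sim > threshold                   # keep primitives above a similarity threshold
    return gaussian_centers[mask], sim[mask]

# Toy usage with random placeholders standing in for a distilled feature field.
feats = torch.randn(10_000, 512)
centers = torch.rand(10_000, 3)
query = torch.randn(512)                     # would come from a text encoder in practice
object_points, scores = localize_by_text(feats, centers, query)
```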
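The normal-guided filtering step can be illustrated with a simplified antipodal force-closure test for a parallel-jaw gripper: a candidate grasp is kept only if the line connecting its two contact points lies inside the friction cones defined by the surface normals at those contacts. The friction coefficient and the specific test below are assumptions used for illustration, not the paper's exact criterion.

```python
import numpy as np

def passes_force_closure(p1, p2, n1, n2, mu=0.5):
    """Simplified antipodal force-closure check for a two-finger grasp.

    p1, p2: 3D contact points on the object surface.
    n1, n2: outward unit surface normals at those contacts.
    mu:     assumed friction coefficient (illustrative value).
    The grasp axis must lie within the friction cone (half-angle arctan(mu))
    around the inward normal at both contacts.
    """
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    cone_half_angle = np.arctan(mu)
    # Angle between the grasp axis and the inward normal (-n) at each contact.
    ang1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0))
    return ang1 <= cone_half_angle and ang2 <= cone_half_angle

# Example: contacts on opposite faces of a small box, normals pointing outward.
p1, n1 = np.array([0.0, -0.02, 0.0]), np.array([0.0, -1.0, 0.0])
p2, n2 = np.array([0.0,  0.02, 0.0]), np.array([0.0,  1.0, 0.0])
print(passes_force_closure(p1, p2, n1, n2))  # True: a clean antipodal grasp
```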
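Because the scene is an explicit set of primitives, updating it after a manipulation can be as simple as removing (or transforming) the primitives selected for the grasped object and then briefly fine-tuning the remaining field on a handful of new views. The parameter layout below is an assumed arrangement for a Gaussian Splatting model, shown only to illustrate the editing step; the fine-tuning loop itself is omitted.

```python
import torch

def remove_object_primitives(params: dict, object_mask: torch.Tensor) -> dict:
    """Delete the Gaussian primitives belonging to a removed object.

    params:      assumed per-primitive tensors, each of shape (N, ...), e.g.
                 {'means': (N, 3), 'scales': (N, 3), 'rotations': (N, 4),
                  'opacities': (N, 1), 'features': (N, D)}.
    object_mask: boolean tensor of shape (N,), True for primitives of the
                 object that was grasped and taken out of the scene.
    """
    keep = ~object_mask
    return {name: tensor[keep] for name, tensor in params.items()}

# Toy usage: drop the primitives previously selected by the language query.
params = {
    'means': torch.rand(1000, 3),
    'scales': torch.rand(1000, 3),
    'rotations': torch.rand(1000, 4),
    'opacities': torch.rand(1000, 1),
    'features': torch.rand(1000, 64),
}
object_mask = torch.zeros(1000, dtype=torch.bool)
object_mask[:50] = True                      # pretend these belong to the grasped object
params = remove_object_primitives(params, object_mask)
# The pruned field would then be fine-tuned on a few freshly captured views.
```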