Robotic Grasping of Novel Objects using Vision

Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
This paper presents a learning-based approach to robotic grasping of novel objects using vision. The method requires no 3D model of the object: instead, a probabilistic model trained on synthetic images identifies a small number of points in 2D images that correspond to good grasping locations, and these points are then triangulated across views to obtain a 3D grasp position. Combining image and depth features further improves performance.

The algorithm was evaluated on two robotic platforms, STAIR 1 (5-dof arm) and STAIR 2 (7-dof arm), on a variety of novel objects including plates, tape rolls, jugs, cellphones, keys, screwdrivers, staplers, and a thick coil of wire, achieving an average grasp success rate of 87.8%. It also unloads items from dishwashers, even when the objects are textureless, translucent, or reflective, with an average success rate of 80% across four object classes. The method is robust to clutter and occlusion, generalizes well to new objects, and, despite being trained entirely on synthetic data, transfers well to real-world scenarios, making it applicable to a wide range of robotic manipulation tasks.
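The first stage of the pipeline scores image locations by their probability of being good grasp points. The sketch below illustrates that idea with a simple per-pixel logistic model; the feature maps, weights, and image dimensions are hypothetical placeholders, not the paper's actual features or trained parameters:

```python
"""Sketch of probabilistic grasp-point detection: score every pixel with
P(grasp point | features) and pick the maximum. Assumes a logistic model
over precomputed per-pixel features; all values here are stand-ins."""
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grasp_point_probabilities(features, w, b):
    """features: (H, W, D) per-pixel image/depth feature array.
    w: (D,) weights and b: bias, assumed trained offline on
    synthetically generated labeled images.
    Returns an (H, W) map of grasp-point probabilities."""
    return sigmoid(features @ w + b)

# Hypothetical usage: find the most promising pixel in one view.
H, W, D = 480, 640, 32                      # made-up image/feature sizes
rng = np.random.default_rng(0)
features = rng.normal(size=(H, W, D))       # stand-in for real filter responses
w, b = rng.normal(size=D), 0.0              # stand-in for learned parameters
prob_map = grasp_point_probabilities(features, w, b)
v, u = np.unravel_index(np.argmax(prob_map), prob_map.shape)
print(f"predicted grasp point at pixel (u={u}, v={v})")
```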
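Once a grasp point has been predicted in two or more calibrated views, its 3D location can be recovered by triangulation. The following is a minimal sketch using the standard linear (DLT) triangulation method rather than the authors' exact implementation; the camera matrices and pixel coordinates are hypothetical placeholders:

```python
"""Sketch of the triangulation step: recover a 3D grasp location from
the predicted 2D grasp point in two calibrated views."""
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point.

    P1, P2: 3x4 camera projection matrices for the two views.
    x1, x2: (u, v) pixel coordinates of the predicted grasp point.
    Returns the 3D point minimizing the algebraic reprojection error."""
    # Each view gives two linear constraints on the homogeneous point X:
    # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution: right singular vector of A with the smallest singular
    # value; dehomogenize to obtain (X, Y, Z).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical example: second camera translated 0.1 along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
point = triangulate(P1, P2, x1=(0.05, 0.02), x2=(-0.05, 0.02))
print(point)  # estimated 3D grasp location, ~ (0.05, 0.02, 1.0)
```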