7 Feb 2024 | ALOHA 2 Team: Jorge Aldaco¹, Travis Armstrong¹, Robert Baruch¹, Jeff Bingham¹, Sanky Chan¹, Kenneth Draper¹, Debidatta Dwibedi¹, Chelsea Finn¹,², Pete Florence¹, Spencer Goodrich¹, Wayne Gramlich¹, Torr Hage¹, Alexander Herzog¹, Jonathan Hoech¹, Thinh Nguyen¹, Ian Storz¹, Baruch Tabanpour¹, Leila Takayama¹,³, Jonathan Tompson¹, Ayzaan Wahid¹, Ted Wahrburg¹, Sichun Xu¹, Sergey Yaroshenko¹, Kevin Zakka¹ and Tony Z. Zhao¹,² (¹Google DeepMind, ²Stanford University, ³Hoku Labs)
**ALOHA 2: An Enhanced Low-Cost Hardware for Bimanual Teleoperation**
The ALOHA 2 team, comprising researchers from Google DeepMind, Stanford University, and Hoku Labs, introduces ALOHA 2, an enhanced version of the original ALOHA platform designed for bimanual teleoperation. ALOHA 2 aims to improve performance, ergonomics, and robustness while reducing hardware costs and complexity. Key improvements include:
1. **Grippers**: New low-friction rail designs for both leader and follower grippers, enhancing teleoperation ergonomics and responsiveness.
2. **Gravity Compensation**: A passive gravity compensation mechanism using off-the-shelf components, improving durability and reducing operator strain.
3. **Frame**: Simplified frame design with additional space for human-robot interaction and props, maintaining rigidity for camera mounting.
4. **Cameras**: Upgraded to smaller Intel RealSense D405 cameras with larger fields of view, depth sensing, and a global shutter, reducing the footprint of the follower arms (a camera-capture sketch follows this list).
5. **Simulation**: A detailed MuJoCo model of ALOHA 2 with system identification, enabling high-quality data collection and policy learning in simulation (a simulation sketch follows this list).
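To illustrate the kind of depth stream the D405 wrist cameras expose, here is a minimal sketch of grabbing one color and depth frame through the `pyrealsense2` bindings; the resolution and frame-rate values are assumptions and may need adjusting for a particular camera and USB link.

```python
# Minimal sketch: reading one color and one depth frame from an Intel RealSense
# D405 via the pyrealsense2 bindings. The 640x480 @ 30 fps settings are
# assumptions; adjust them to whatever the device actually supports.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()             # blocks until a frameset arrives
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    depth = np.asanyarray(depth_frame.get_data())   # uint16 image in the device's depth units
    color = np.asanyarray(color_frame.get_data())   # uint8 BGR image
    print("depth:", depth.shape, "color:", color.shape)
finally:
    pipeline.stop()
```

For the simulation item, the sketch below loads a MuJoCo model and steps its physics using the official `mujoco` Python bindings. The file name `aloha2_scene.xml` is a placeholder, not the actual path of the released ALOHA 2 assets, and the constant control target is only there to show the stepping loop.

```python
# Minimal sketch: loading a MuJoCo model of a bimanual ALOHA-style setup and
# stepping the simulation. "aloha2_scene.xml" is a placeholder path; point it
# at wherever the released model actually lives.
import mujoco
import numpy as np

model = mujoco.MjModel.from_xml_path("aloha2_scene.xml")  # placeholder path
data = mujoco.MjData(model)

mujoco.mj_resetData(model, data)   # reset state to the model defaults

# Hold a fixed actuator target and advance the physics for ~1000 steps.
target = np.zeros(model.nu)
for _ in range(1000):
    data.ctrl[:] = target
    mujoco.mj_step(model, data)

print("final joint positions:", data.qpos)
```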
These enhancements make it easier to collect large datasets for complex manipulation tasks, such as folding a T-shirt, tying a knot, or throwing objects. The team also provides a detailed tutorial and open-source hardware designs to facilitate research and collaboration. The project website is available at [aloha-2.github.io](https://aloha-2.github.io).
**Conclusion**
ALOHA 2 is a low-cost, high-performance system designed for bimanual teleoperation, aiming to advance research in robot learning through large-scale data collection. The team hopes that the open-source nature of ALOHA 2 will accelerate progress in fine-grained bimanual manipulation tasks.