Sim-to-Real Transfer of Robotic Control with Dynamics Randomization

3 Mar 2018 | Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel
This paper presents a method for sim-to-real transfer of robotic control using dynamics randomization. The key idea is to train policies in simulation with randomized dynamics, so that they adapt to real-world dynamics without any additional training on the physical system.

During training, the method randomizes dynamics parameters such as link masses, joint damping, friction, and observation noise. This variability forces policies to generalize across a wide range of dynamics rather than overfitting to a single simulated model. Policies are trained with a recurrent deterministic policy gradient (RDPG) algorithm, combined with hindsight experience replay (HER) to handle sparse rewards; the recurrent architecture lets a policy implicitly infer the dynamics of the system it is acting in from its history of observations.

The approach is evaluated on an object-pushing task with a 7-DOF Fetch Robotics arm. Policies trained exclusively in simulation perform the task on the real robot with comparable performance, despite significant calibration errors between the simulated and real environments. Ablations over design decisions show that policies trained with randomized dynamics are substantially more robust to such calibration errors than policies trained on a single fixed model.

The paper closes by suggesting future work extending the approach to more complex tasks and incorporating additional modalities such as vision.
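The core training loop can be sketched in a few lines: before each episode, a fresh set of dynamics parameters is sampled and applied to the simulator, and the policy is rolled out under those dynamics. The parameter names, ranges, and simulator interface (`env.set_dynamics`, `env.step`) below are illustrative assumptions, not the paper's actual values or API.

```python
import random

# Illustrative randomization ranges, loosely following the paper's idea of
# randomizing mass, damping, friction, and observation noise each episode.
# These names and bounds are assumptions for the sketch, not the paper's.
DYNAMICS_RANGES = {
    "link_mass_scale": (0.5, 1.5),      # multiplier on each link's mass
    "joint_damping_scale": (0.3, 3.0),  # multiplier on joint damping
    "table_friction": (0.2, 1.2),       # friction coefficient of the table
    "obs_noise_std": (0.0, 0.02),       # std of Gaussian observation noise
}

def sample_dynamics(ranges=DYNAMICS_RANGES, rng=random):
    """Draw one set of dynamics parameters for the next training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def run_episode(env, policy, ranges=DYNAMICS_RANGES):
    """Re-randomize the simulator's dynamics, then roll out one episode."""
    params = sample_dynamics(ranges)
    env.set_dynamics(params)        # hypothetical simulator hook
    obs = env.reset()
    done = False
    while not done:
        action = policy(obs)        # recurrent policy keeps its own state
        obs, reward, done = env.step(action)
    return params
```

Because a new parameter set is drawn every episode, the recurrent policy never sees a single fixed model and must instead adapt online to whatever dynamics the current rollout presents, which is what enables the zero-shot transfer described above.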