Practice Makes Perfect: Planning to Learn Skill Parameter Policies

18 May 2024 | Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Jennifer Barry
The paper "Practice Makes Perfect: Planning to Learn Skill Parameter Policies" by Nishanth Kumar et al. addresses the challenge of effective robot decision-making in complex, long-horizon tasks by sequencing parameterized skills. The authors propose a method called Estimate, Extrapolate & Situate (EES) to guide the robot in selecting which skills to practice to maximize future task success. EES involves estimating the competence of each skill, extrapolating how that competence would improve with practice, and situating the competence within the task distribution. The robot plans to practice skills, chains them together to reach a goal state, and updates its parameter policies based on the outcomes.

Experiments in both simulated and real-world environments demonstrate that EES enables the robot to learn effective parameter policies more efficiently than several baselines, improving its ability to solve long-horizon mobile-manipulation tasks after a few hours of autonomous practice. The approach leverages AI planning and active learning to handle noise in perception and control, making it robust and adaptable to real-world conditions.
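The estimate-extrapolate-situate loop described above can be illustrated with a simplified sketch. This is not the paper's implementation: the Beta-posterior competence estimate, the one-step optimistic extrapolation, and the `usage_freq` weighting are all illustrative assumptions standing in for the paper's competence models and task-distribution reasoning.

```python
# Hedged sketch of EES-style skill selection (illustrative, not the paper's method).
# estimate: competence = posterior mean of observed successes/failures.
# extrapolate: optimistic competence after one hypothetical successful practice.
# situate: weight each skill's predicted gain by how often tasks need it.

def estimate_competence(successes: int, failures: int) -> float:
    """Posterior-mean estimate with a uniform Beta(1, 1) prior."""
    return (successes + 1) / (successes + failures + 2)

def extrapolate_competence(successes: int, failures: int) -> float:
    """Assumed improvement model: one more practice adds a pseudo-success."""
    return estimate_competence(successes + 1, failures)

def select_skill_to_practice(skills: dict) -> str:
    """Pick the skill whose practice most improves expected task success.

    `skills` maps name -> (successes, failures, usage_freq), where
    usage_freq approximates how often the task distribution uses the skill.
    """
    def expected_gain(stats):
        s, f, freq = stats
        return freq * (extrapolate_competence(s, f) - estimate_competence(s, f))
    return max(skills, key=lambda name: expected_gain(skills[name]))

skills = {
    "pick":  (8, 2, 0.9),   # already competent, frequently needed -> small gain
    "place": (2, 8, 0.9),   # weak and frequently needed -> largest gain
    "sweep": (1, 9, 0.1),   # weak but rarely needed -> gain discounted
}
print(select_skill_to_practice(skills))  # -> place
```

The key design point this sketch preserves is that practice targets are chosen not by raw incompetence but by expected improvement weighted by relevance to future tasks: "sweep" is the weakest skill, yet "place" is selected because it matters far more often.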