N. Ando et al. (Eds.): SIMPAR 2010, LNAI 6472, pp. 288–299, 2010. | Krishna Kumar Narayanan, Luis Felipe Posada, Frank Hoffmann, and Torsten Bertram
The paper "Robot Programming by Demonstration" by Krishna Kumar Narayanan, Luis Felipe Posada, Frank Hoffmann, and Torsten Bertram addresses the challenge of designing robust visual behaviors for autonomous robot navigation in complex indoor environments. The authors propose a framework that combines a 3D range camera with an omnidirectional camera to learn visual navigation behaviors from demonstration examples. The approach maps visual features extracted from the omnidirectional image onto corresponding robot motion commands, using locally weighted regression and artificial neural networks to identify the most discriminant features. Extensive tests demonstrate that the learned visual behavior is robust and generalizes to previously unseen environments.
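The feature-to-motion mapping described above can be illustrated with a minimal locally weighted regression sketch. This is not the authors' implementation: the 1-D "feature", the Gaussian kernel, and the bandwidth value are illustrative assumptions, standing in for the paper's omnidirectional image features and robot motion commands.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.2):
    """Locally weighted regression: predict a motion command for a query
    feature vector by fitting a linear model weighted toward nearby
    demonstrations. X: (n, d) demonstrated features, y: (n,) commands."""
    # Gaussian kernel weights: demonstrations close to the query dominate.
    dists = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-dists**2 / (2 * bandwidth**2))
    # Augment with a bias column and solve the weighted least-squares problem.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(Xa.T @ W @ Xa, Xa.T @ W @ y, rcond=None)
    return np.append(x_query, 1.0) @ beta

# Toy demonstration data: a scalar feature (e.g. bearing of free space)
# mapped to a steering command through a nonlinear demonstrated policy.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(2 * X[:, 0])
pred = lwr_predict(X, y, np.array([0.3]))
```

Because the model is refit around each query, locally weighted regression reproduces nonlinear demonstrated mappings without committing to a single global parametric form, which suits behavior learning from heterogeneous demonstration examples.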
The framework consists of three stages: imitation of a sonar-based navigation behavior, explicit human demonstration, and learning with a critic based on the robot's own experiences. The key challenge lies in generalizing perceptions and actions from the demonstrated scenarios to novel situations, which the authors tackle by classifying environmental contexts into prototypical classes and matching the current situation against the demonstrated scenarios via these classifications.
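The matching step can be sketched as nearest-prototype classification: the current perception is assigned to the closest prototypical class, and the demonstration associated with that class is retrieved. The class names and feature vectors below are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical prototype feature vectors for environmental contexts
# (class names and feature values are illustrative assumptions only).
prototypes = {
    "corridor":   np.array([0.9, 0.1, 0.2]),
    "open_space": np.array([0.2, 0.8, 0.7]),
    "doorway":    np.array([0.6, 0.3, 0.9]),
}

def classify_context(features):
    """Assign the current perception to the nearest prototypical class,
    so the matching demonstrated scenario can be looked up."""
    return min(prototypes, key=lambda c: np.linalg.norm(features - prototypes[c]))

# A perception close to the corridor prototype is classified as such.
label = classify_context(np.array([0.85, 0.15, 0.25]))
```

Reducing the continuous perceptual space to a small set of prototypical classes is what lets a finite set of demonstrations cover novel situations: any new context only needs to resemble some demonstrated class, not match a demonstration exactly.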