Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving

Dean A. Pomerleau | in press
This chapter discusses the development and training of ALVINN (Autonomous Land Vehicle In a Neural Network), an artificial neural network that controls the Navlab, Carnegie Mellon's autonomous driving test vehicle. The central challenge is the flexibility and efficiency required of an autonomous driver in constantly changing environments: single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden terrain, at speeds of up to 55 miles per hour.

ALVINN is a feedforward network with a single hidden layer. Its input layer is a 30x32-unit "retina" onto which sensor images are projected, and its output layer contains 30 units representing steering directions. The network is trained with backpropagation on live sensor images paired with the human driver's steering directions; sketches of the architecture and training techniques appear below.

Training on real-time sensor data raises two difficulties: a human driver rarely strays far from the road center, so the network never observes recovery situations, and consecutive images from a live feed are highly repetitive. To address the first, the authors transform each sensor image and its steering direction, shifting and rotating the image to simulate the vehicle being off-center or misoriented and adjusting the steering direction to the one that would return the vehicle to the road center, yielding additional training exemplars from each live image. To address the second, training draws on a buffer of previously encountered patterns rather than only the most recent image, preventing overtraining on repetitive scenes.

Experiments demonstrate the effectiveness of these techniques: the network learns to drive accurately under a variety of conditions, with significantly better steering accuracy than networks trained using only real sensor data.
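To make the architecture concrete, here is a minimal forward-pass sketch of a network with the dimensions given above. The hidden-layer size, weight initialization, and sigmoid activations are assumptions for illustration; the summary specifies only the 30x32 retina and the 30-unit steering output.

```python
import numpy as np

RETINA_ROWS, RETINA_COLS = 30, 32
N_INPUT = RETINA_ROWS * RETINA_COLS   # 960 input units, one per retina pixel
N_HIDDEN = 4                          # assumed size; not fixed by this summary
N_OUTPUT = 30                         # one unit per steering direction

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_INPUT))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_OUTPUT, N_HIDDEN))
b2 = np.zeros(N_OUTPUT)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(retina_image):
    """Map a 30x32 sensor image to 30 steering-unit activations."""
    x = retina_image.reshape(-1)        # flatten the retina into a vector
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

def steering_index(outputs):
    # Read steering off as the most active output unit. The real system
    # locates the peak of the activation "hill" more precisely; that
    # refinement is omitted here.
    return int(np.argmax(outputs))
```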
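The transformation scheme can be illustrated in the same spirit. The sketch below creates an extra exemplar by laterally shifting an image and correcting the steering target; the proportionality constant and the plain horizontal shift are stand-ins, since the method described in the chapter also rotates the image and fills in the road region exposed by the shift rather than wrapping pixels around.

```python
import numpy as np

def shift_exemplar(image, steering_angle, pixel_shift, angle_per_pixel=0.01):
    """Create an additional training exemplar from one live image by
    simulating the vehicle being laterally off-center.

    `angle_per_pixel` is a made-up proportionality constant relating
    image shift to steering correction; the actual correction depends on
    the vehicle's simulated position and orientation.
    """
    shifted = np.roll(image, pixel_shift, axis=1)  # crude horizontal shift
    # np.roll wraps edge columns around; the real system extrapolates
    # the road edges instead of wrapping.
    corrected = steering_angle + pixel_shift * angle_per_pixel
    return shifted, corrected
```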
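Finally, a sketch of the buffering idea: keep a fixed-size pool of previously encountered exemplars so that repetitive live images do not dominate training. The random-replacement policy and capacity here are assumptions; the chapter discusses its replacement strategy in more detail.

```python
import random

class TrainingBuffer:
    """Fixed-size pool of recent (image, steering) exemplars."""

    def __init__(self, capacity=200):
        self.capacity = capacity
        self.patterns = []

    def add(self, image, steering):
        if len(self.patterns) < self.capacity:
            self.patterns.append((image, steering))
        else:
            # Overwrite a randomly chosen old pattern so the buffer keeps
            # a mix of recent and older road scenes.
            self.patterns[random.randrange(self.capacity)] = (image, steering)

    def sample_batch(self, n):
        # Train on a sample drawn from the whole buffer, not just the
        # latest image, to avoid overtraining on repetitive scenes.
        return random.sample(self.patterns, min(n, len(self.patterns)))
```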
The chapter concludes by discussing the flexibility and adaptability of ALVINN, highlighting its ability to handle a wide range of driving situations and the ongoing efforts to combine multiple domain-specific networks for more robust performance.
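The chapter only notes that integrating multiple domain-specific networks is ongoing work, so the following is purely a hypothetical illustration of one way such a combination could look: weight each network's 30 steering-unit activations by an externally supplied reliability score and take the strongest combined response. This weighting scheme is an assumption, not the chapter's method.

```python
import numpy as np

def arbitrate(outputs_by_network, reliabilities):
    """Combine the 30-unit output vectors of several domain-specific
    driving networks into a single steering choice.

    `reliabilities` is a hypothetical per-network confidence score; how
    such scores would be obtained is not specified here.
    """
    combined = sum(w * out for out, w in zip(outputs_by_network, reliabilities))
    return int(np.argmax(combined))
```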