This chapter presents ALVINN, an artificial neural network designed to control the Carnegie Mellon University Navlab autonomous driving test vehicle. ALVINN is a single-hidden-layer feedforward network that processes images from a video camera or scanning laser rangefinder, projected onto a 30x32 input "retina." The output layer provides a linear representation of the steering direction, allowing the vehicle to stay on the road or avoid obstacles. The network is trained with backpropagation to mimic a human driver, using the driver's steering responses to the current sensor images as target outputs. Training happens on the fly with real-time sensor data, so the network adapts quickly to new situations.
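As a rough sketch of that architecture, the Python fragment below wires up a single-hidden-layer feedforward network over a 30x32 retina and decodes a linearly arranged vector of output units into a single steering value. The hidden and output layer sizes, the tanh activations, and the decoding scheme are illustrative assumptions, not values taken from this chapter.

```python
import numpy as np

RETINA_ROWS, RETINA_COLS = 30, 32   # input "retina" dimensions from the text
N_HIDDEN = 4                        # assumed small hidden layer
N_OUTPUT = 30                       # assumed number of linearly arranged steering units

rng = np.random.default_rng(0)

# Weight matrices for a single-hidden-layer feedforward network.
W1 = rng.normal(scale=0.1, size=(RETINA_ROWS * RETINA_COLS, N_HIDDEN))
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_OUTPUT))

def forward(image: np.ndarray) -> np.ndarray:
    """Map a 30x32 sensor image to activations over the steering units."""
    x = image.reshape(-1)            # flatten the retina into a vector
    hidden = np.tanh(x @ W1)         # single hidden layer
    return np.tanh(hidden @ W2)      # one activation per steering direction

def steering_direction(output: np.ndarray) -> float:
    """Decode the linearly arranged output units into one steering value in
    [-1, 1], where -1 is a hard left turn and +1 a hard right turn."""
    directions = np.linspace(-1.0, 1.0, N_OUTPUT)
    weights = output - output.min()  # shift activations to non-negative weights
    return float(np.average(directions, weights=weights + 1e-9))
```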
To increase the diversity of the training data, the system transforms each sensor image by shifting and rotating it, creating additional training examples in which the vehicle appears at different positions and orientations relative to the road. These transformed examples teach the network to recover from misalignment errors it would otherwise never observe while the human drives correctly. In addition, a buffer of previously encountered training patterns is maintained, keeping the training set diverse and preventing the network from overlearning the most recent, often repetitive, images.
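A minimal sketch of that augmentation and buffering idea follows, assuming scipy.ndimage for the shift and rotation. The shift and rotation ranges, the random replacement policy, and the helper names (transform_example, TrainingBuffer) are hypothetical simplifications of what the chapter describes; in particular, the chapter's buffer management is more involved than random replacement.

```python
import numpy as np
from scipy.ndimage import shift, rotate

def transform_example(image, steering, rng, max_shift_px=3, max_rot_deg=5.0):
    """Create an extra training example by laterally shifting and rotating the
    sensor image, as if the vehicle were displaced from where it actually was.
    The steering target must be corrected for the new pose; the correction here
    is a placeholder (see the pure pursuit sketch below)."""
    dx = rng.uniform(-max_shift_px, max_shift_px)
    dtheta = rng.uniform(-max_rot_deg, max_rot_deg)
    shifted = shift(image, (0, dx), mode="nearest")          # lateral shift
    transformed = rotate(shifted, dtheta, reshape=False, mode="nearest")
    corrected_steering = steering  # placeholder for the geometric correction
    return transformed, corrected_steering

class TrainingBuffer:
    """Fixed-size pool of previously encountered patterns, so the network keeps
    seeing a diverse mix instead of only the most recent, repetitive images."""
    def __init__(self, capacity=200, rng=None):
        self.capacity = capacity
        self.rng = rng or np.random.default_rng()
        self.patterns = []

    def add(self, example):
        if len(self.patterns) < self.capacity:
            self.patterns.append(example)
        else:  # overwrite a randomly chosen old pattern (simplified policy)
            self.patterns[self.rng.integers(self.capacity)] = example
```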
The correct steering direction for each training image, including the transformed ones, is derived using a pure pursuit model, which computes the steering radius of the arc that carries the vehicle from its current position to a desired target point ahead on the road. Steering directions obtained this way closely match those a human driver actually chooses. The training buffer is also managed so that left and right steering examples remain roughly balanced, ensuring the network does not learn to favor one steering direction over the other.
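The sketch below uses the standard pure pursuit geometry, in which the steering radius is that of the circular arc joining the vehicle's current position to a target point at lookahead distance l and lateral offset d, giving r = (d^2 + l^2) / (2d). The function name and the example numbers are illustrative, and the chapter's own derivation may differ in detail.

```python
import math

def pure_pursuit_radius(lateral_offset_m: float, lookahead_m: float) -> float:
    """Steering radius of the arc that takes the vehicle from its current pose
    to a target point lookahead_m ahead and lateral_offset_m to the side.
    Derived from the circle through both points whose centre lies on the
    vehicle's lateral axis: r = (d^2 + l^2) / (2 d)."""
    d, l = lateral_offset_m, lookahead_m
    if abs(d) < 1e-9:
        return math.inf  # target dead ahead: drive straight
    return (d * d + l * l) / (2.0 * d)

# Example: target point 10 m ahead and 0.5 m to the right.
print(pure_pursuit_radius(0.5, 10.0))   # ~100.25 m turning radius
```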
The performance of ALVINN is demonstrated through experiments showing that it can drive accurately in a wide range of situations, including single-lane roads, multi-lane highways, and obstacle-rich environments. The system's ability to adapt to new situations and its flexibility in handling various driving scenarios make it a significant advance in autonomous navigation. The use of connectionist techniques allows ALVINN to learn and generalize from training data, making it a robust and efficient system for autonomous driving.