Stanley: The Robot that Won the DARPA Grand Challenge

2006 | Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia Oakley, Mark Palatucci, Vaughan Pratt, Pascal Stang, Sven Strohband, Cedric Dupont, Lars-Erik Jendrossek, Christian Koelen, Charles Markey, Carlo Rummel, Joe van Niekerk, Eric Jensen, Philippe Alessandrini, Gary Bradski, Bob Davies, Scott Ettinger, Adrian Kaehler, Ara Nefian, and Pamela Mahoney
Stanley, a robot developed by Stanford University and its partners, won the 2005 DARPA Grand Challenge. The robot was designed for high-speed desert driving without manual intervention, using artificial intelligence techniques such as machine learning and probabilistic reasoning. Stanley was built on a 2004 Volkswagen Touareg R5 TDI, outfitted with a six-processor computing platform and the sensors and actuators needed for autonomous driving. It completed the 132-mile course in 6 hours, 53 minutes, and 58 seconds, becoming the first robot ever to win the challenge.

The central technological challenge was building a highly reliable system capable of driving at high speed, with high precision, through diverse and unstructured off-road environments. Meeting it produced advances in autonomous navigation, including long-range terrain perception, real-time collision avoidance, and stable vehicle control on slippery and rugged terrain. Stanley's success grew out of an intense development effort led by Stanford University and involving experts from Volkswagen of America, Mohr Davidow Ventures, Intel Research, and other organizations.

Stanley's software system treated autonomous navigation as fundamentally a software problem. It consisted of six layers: sensor interface, perception, control, vehicle interface, user interface, and global services. The sensor interface layer received and timestamped all sensor data, while the perception layer mapped that data into internal models. The control layer regulated the vehicle's steering, throttle, and brake response. The vehicle interface layer connected to the robot's drive-by-wire system, and the user interface layer included the remote E-stop and a touch-screen module for starting up the software.
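The flow from sensor interface through perception to control can be illustrated with a toy sketch. All class and function names here are illustrative assumptions, not Stanley's actual APIs:

```python
import time
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    # Sensor-interface layer: every reading is timestamped on arrival.
    source: str
    value: float
    stamp: float = field(default_factory=time.monotonic)

def perceive(readings):
    # Perception layer: map raw readings into an internal model
    # (here, simply the most recent value per sensor).
    model = {}
    for r in sorted(readings, key=lambda r: r.stamp):
        model[r.source] = r.value
    return model

def control(model, target_speed=10.0):
    # Control layer: turn the internal model into actuation commands,
    # with throttle clamped to [0, 1]. Gains are made up for illustration.
    error = target_speed - model.get("wheel_speed", 0.0)
    return {"throttle": max(0.0, min(1.0, 0.1 * error)), "steer": 0.0}

readings = [SensorReading("wheel_speed", 8.0), SensorReading("gps_speed", 8.2)]
commands = control(perceive(readings))
```

In the real system each layer ran as separate processes exchanging timestamped messages; the sketch only shows the direction of data flow.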
The global services layer provided basic services for all software modules, including naming and communication services, a centralized parameter server, a power server, a health monitor, and a time server. The architecture emphasized reliability: dedicated modules monitored the health of individual software and hardware components and automatically restarted or power-cycled them when a failure was observed.

Stanley's vehicle state estimation used an unscented Kalman filter (UKF) to estimate the vehicle's coordinates, orientation, and velocities, incorporating observations from the GPS, GPS compass, IMU, and wheel encoders. Because the estimates degraded during GPS outages, a more restrictive motion model was used during such times.

Terrain mapping used laser range finders and a color camera to detect drivable surfaces and obstacles. The computer vision system modeled the color of drivable terrain with a mixture of Gaussians, allowing it to adapt to new terrain within seconds. Road property estimation applied probabilistic low-pass filters to the laser map to determine the road boundaries; the road center was defined as the midpoint of the two boundaries, and the lateral offset from it was a component in scoring trajectories during path planning. Stanley's ability to stay near the center of the road helped it avoid most obstacles on desert roads.
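The UKF-based state estimation described above can be sketched on a toy two-state model: position and speed along a line, with a GPS-like sensor measuring position and a wheel-encoder-like sensor measuring speed. This is a minimal illustrative sketch of the unscented Kalman filter machinery, not Stanley's full multi-sensor estimator, and all names and noise values are assumptions:

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    # Julier-style sigma points for an n-dimensional Gaussian (x, P).
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, dt, Q, R):
    f = lambda s: np.array([s[0] + s[1] * dt, s[1]])  # constant-velocity motion
    h = lambda s: np.array([s[0], s[1]])              # measure position and speed
    # Predict: push sigma points through the motion model.
    pts, w = sigma_points(x, P)
    fp = np.array([f(p) for p in pts])
    x_pred = w @ fp
    P_pred = Q + sum(wi * np.outer(p - x_pred, p - x_pred) for wi, p in zip(w, fp))
    # Update: push new sigma points through the measurement model.
    pts2, w2 = sigma_points(x_pred, P_pred)
    hp = np.array([h(p) for p in pts2])
    z_pred = w2 @ hp
    S = R + sum(wi * np.outer(p - z_pred, p - z_pred) for wi, p in zip(w2, hp))
    C = sum(wi * np.outer(sp - x_pred, p - z_pred) for wi, sp, p in zip(w2, pts2, hp))
    K = C @ np.linalg.inv(S)
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

x, P = np.zeros(2), np.eye(2)
Q = 0.01 * np.eye(2)
R = np.diag([4.0, 0.04])  # assume GPS position noisier than encoder speed
x, P = ukf_step(x, P, z=np.array([1.0, 1.0]), dt=0.1, Q=Q, R=R)
```

Because the encoder's assumed noise is much smaller than the GPS's, a single update pulls the speed estimate much closer to the measurement than the position estimate, which is the kind of sensor weighting the real filter performed across many more states.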
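The mixture-of-Gaussians color model mentioned earlier can be sketched as a simple likelihood score over RGB pixels. The parameters below are invented for illustration (a diagonal-covariance, NumPy-only sketch, not Stanley's trained and continuously adapted model):

```python
import numpy as np

def mog_loglik(pixels, means, variances, weights):
    # Log-likelihood of RGB pixels under a diagonal-covariance
    # mixture of Gaussians (one row of means/variances per component).
    pixels = np.atleast_2d(pixels).astype(float)          # (N, 3)
    diff = pixels[:, None, :] - means[None, :, :]         # (N, K, 3)
    log_comp = (-0.5 * np.sum(diff ** 2 / variances, axis=2)
                - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1))
    return np.log(np.sum(weights * np.exp(log_comp), axis=1))

# Illustrative parameters: two "drivable" color modes (sandy and gray road).
means = np.array([[180.0, 160.0, 120.0], [120.0, 120.0, 120.0]])
variances = np.full((2, 3), 400.0)
weights = np.array([0.6, 0.4])

drivable = mog_loglik([[178, 158, 122]], means, variances, weights)
grass = mog_loglik([[40, 160, 40]], means, variances, weights)
# A pixel near a learned mode scores far higher than an off-model one.
```

In the real system the mixture parameters were updated online from laser-verified drivable terrain, which is what let the classifier adapt to new ground color within seconds.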
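The role of lateral offset in trajectory scoring can be sketched as a penalty on distance from the road center, taken as the midpoint of the two estimated boundaries. Function name and weight are illustrative assumptions, not Stanley's actual planner cost:

```python
import numpy as np

def lateral_offset_cost(trajectory, left_boundary, right_boundary, weight=1.0):
    # Penalize a trajectory's squared lateral offset from the road
    # center, defined as the midpoint of the two estimated boundaries.
    center = 0.5 * (np.asarray(left_boundary) + np.asarray(right_boundary))
    offsets = np.asarray(trajectory) - center
    return weight * float(np.sum(offsets ** 2))

left = [0.0, 0.0, 0.0]    # lateral position of left boundary (m), per waypoint
right = [6.0, 6.0, 6.0]   # lateral position of right boundary (m), per waypoint
centered = lateral_offset_cost([3.0, 3.0, 3.0], left, right)  # -> 0.0
drifting = lateral_offset_cost([4.0, 4.5, 5.0], left, right)
```

A center-hugging trajectory scores lower (better) than one drifting toward a boundary, which is how staying near the road center fell out of the planner's scoring rather than being a hard constraint.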