5 May 2020 | Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, Oscar Beijbom
nuScenes is a multimodal dataset for autonomous driving, developed by nuTonomy. It carries the full autonomous vehicle sensor suite: 6 cameras, 5 radars, and 1 lidar, together providing full 360-degree coverage. The dataset contains 1000 scenes, each 20 seconds long, annotated with 3D bounding boxes for 23 object classes and 8 attributes, and it has significantly more annotations and images than the KITTI dataset.

nuScenes is the first large-scale dataset to include radar data and the first to provide full sensor-suite coverage for autonomous vehicles. It also includes semantic maps and scene descriptions, enabling a wide range of tasks including detection, tracking, prediction, and localization. The release introduces novel 3D detection and tracking metrics, along with detailed analyses and baselines for lidar- and image-based detection and tracking.

The dataset has been widely adopted by the autonomous driving community and used for research tasks such as 3D object detection, multi-agent forecasting, pedestrian localization, and weather augmentation. It is available under a CC BY-NC-SA 4.0 license and ships with a devkit, evaluation code, and a database schema intended to support industry-wide standardization. Reported baselines show substantial gains over previous state-of-the-art detection and tracking methods, with improvements of 40% and 81% respectively, and the dataset has been used to evaluate both lidar-based and camera-based approaches. Because its scenes span varied weather conditions, lighting, and traffic situations, nuScenes is a valuable resource for research in autonomous driving.
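To make the detection metrics mentioned above concrete, the sketch below computes the composite nuScenes detection score (NDS) as defined in the nuScenes paper: a weighted sum of mean Average Precision and five true-positive error metrics (translation, scale, orientation, velocity, and attribute errors), each error clipped to [0, 1] and converted into a score. The input values in the example are hypothetical, not results from the paper.

```python
def nd_score(mean_ap, tp_errors):
    """nuScenes detection score (NDS).

    mean_ap:   mean Average Precision over classes and match thresholds.
    tp_errors: the five mean true-positive error metrics
               [ATE, ASE, AOE, AVE, AAE].

    Each error is clipped to [0, 1] and mapped to a score via
    1 - err; mAP carries half the total weight.
    """
    assert len(tp_errors) == 5, "expected the five TP error metrics"
    tp_scores = [1.0 - min(1.0, err) for err in tp_errors]
    return (5.0 * mean_ap + sum(tp_scores)) / 10.0

# Hypothetical detector: mAP = 0.40, TP errors = [0.3, 0.25, 0.4, 0.8, 0.2]
print(round(nd_score(0.40, [0.3, 0.25, 0.4, 0.8, 0.2]), 4))  # 0.505
```

Capping each error at 1 keeps a single badly estimated quantity (e.g. velocity for a stationary class) from dominating the composite score, while the 50% weight on mAP keeps detection quality central.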