nuScenes.org: A first-of-its-kind public dataset for autonomous driving

On behalf of the team at Aptiv, I am proud to share nuScenes, a public dataset for autonomous driving. nuScenes is the first large-scale dataset to provide data from a comprehensive autonomous vehicle (AV) sensor suite: 6 cameras, 1 LIDAR, 5 RADARs, plus GPS and IMU. In addition, the nuScenes data is annotated at 2 Hz with 1.1 million 3D bounding boxes drawn from 25 classes, each annotated with 8 attributes such as visibility, activity and pose. Not only does nuScenes stand out among other recent autonomous driving dataset releases (which typically offer data from only a single modality); it is also an order of magnitude larger, and substantially richer, than KITTI, the benchmark multi-modal dataset.
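For a feel of how these keyframes and annotations can be consumed, here is a minimal sketch using the companion nuScenes Python devkit (nuscenes-devkit, linked from nuScenes.org). The version string and data root below are placeholders for a local download; treat this as an illustrative example rather than a definitive recipe.

```python
# Minimal sketch: browsing nuScenes keyframe annotations with the Python
# devkit. Assumes `pip install nuscenes-devkit` and a local copy of the
# dataset; the version string and dataroot are placeholders for your setup.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# A scene is one short drive; its keyframes ("samples") are annotated at 2 Hz.
scene = nusc.scene[0]
sample = nusc.get('sample', scene['first_sample_token'])

# Each keyframe bundles synchronized sensor data (cameras, LIDAR, RADAR) ...
print(sorted(sample['data'].keys()))  # e.g. 'CAM_FRONT', 'LIDAR_TOP', ...

# ... and a list of 3D bounding boxes, each carrying a class and attributes.
for ann_token in sample['anns']:
    ann = nusc.get('sample_annotation', ann_token)
    attrs = [nusc.get('attribute', t)['name'] for t in ann['attribute_tokens']]
    print(ann['category_name'], ann['size'], attrs)
```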

By releasing nuScenes, we aim to broadly support research into computer vision and autonomous driving by AV innovators and academic researchers. Our team believes that knowledge sharing—between corporations, startups, academia, governmental agencies, and other entities—will enable robust progress and innovation in the AV industry.

We believe that sharing data that specifically enables the development of safer, smarter AV systems will benefit the entire AV industry, accelerating advances in AV technology and ultimately improving autonomous transportation in cities worldwide.

nuScenes came to life when Oscar Beijbom, our machine learning lead, identified a gap in the set of available public benchmarks. While a rich suite of image-based datasets (such as ImageNet, Berkeley DeepDrive and Cityscapes) was available, and had enabled immense progress in deep learning-based methods for vision, multimodal datasets were lacking. Oscar led the development of nuScenes to fill this important gap.

Our nuScenes team has collected 1,000 street scenes in Boston and Singapore, cities known for their dense traffic and challenging driving situations. The data was then annotated with 3D bounding boxes by Scale, our annotation partner. The richness and complexity of the nuScenes dataset are intended to encourage the development of methods that enable safe driving in urban areas containing many objects per scene. Furthermore, the inclusion of data from multiple continents allows for the study of algorithm performance across different weather conditions, vehicle types, vegetation, road markings and driving rules (e.g., left-hand versus right-hand traffic).

nuScenes will grow over time. The final dataset, scheduled for release in March 2019, will include approximately 1.4M camera images, 400k LIDAR sweeps, 1.3M RADAR sweeps and 1.1M object bounding boxes in 40k keyframes. Additionally, in 2019, the nuScenes team will organize challenges on various computer vision tasks to provide a benchmark for state-of-the-art methods.
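As a rough consistency check (assuming, per nuScenes.org, that each scene spans about 20 seconds): 1,000 scenes × 20 s × 2 Hz annotation rate ≈ 40,000 annotated keyframes, matching the figure above.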

Visit nuScenes.org to explore the dataset and to keep abreast of updates.

Authors
Karl Iagnemma
President, Aptiv Autonomous Mobility
