Easily import your pointclouds with annotations in the nuScenes format.
The original nuScenes dataset is a comprehensive, large-scale public dataset designed for autonomous driving research, developed by Motional. It includes 1,000 diverse 20-second driving scenes from Boston and Singapore, featuring dense traffic and challenging conditions. With 1.4M camera images, 390k LIDAR sweeps, and 1.4M RADAR sweeps, it provides extensive multimodal sensor data, including annotations for 23 object classes with 3D bounding boxes at 2Hz and object-level attributes like visibility and activity.
Please note that the original dataset's format is best suited for the point cloud episodes modality. That said, it can also be imported as individual point clouds.
Supported point cloud format: .bin
With annotations: yes
Supported annotation format: .json
Data structure: Information is provided below.
Example data: .
Both directory and archive are supported.
Format directory structure:
Every .bin file in a sequence has to be stored inside a LIDAR_TOP folder within the samples and sweeps folders of the dataset.
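A typical layout follows the public nuScenes convention; the version folder name v1.0-mini and the point cloud file name below are only examples:

```
dataset/
├── maps/
├── samples/
│   └── LIDAR_TOP/
│       └── example__LIDAR_TOP__1533151603547590.pcd.bin
├── sweeps/
│   └── LIDAR_TOP/
│       └── ...
└── v1.0-mini/
    ├── attribute.json
    ├── calibrated_sensor.json
    ├── category.json
    ├── ego_pose.json
    ├── instance.json
    ├── log.json
    ├── map.json
    ├── sample.json
    ├── sample_annotation.json
    ├── sample_data.json
    ├── scene.json
    ├── sensor.json
    └── visibility.json
```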
The nuScenes annotations are token-based. The dataset is composed of dozens of scenes, each containing a number of samples (pointclouds). Samples are linked to annotation data through unique identifiers. These tokens ensure that each point cloud frame is accurately associated with its corresponding annotations, camera data, log information, maps, etc., maintaining the integrity and structure of the dataset.
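As an illustration of how these tokens tie the tables together, the sketch below groups annotation records by the sample (keyframe) they belong to, using only the standard library; the path to the JSON tables is a placeholder:

```python
import json
from collections import defaultdict
from pathlib import Path

# Folder containing the nuScenes JSON tables (e.g. a version folder such as v1.0-mini).
table_dir = Path("dataset/v1.0-mini")

samples = json.loads((table_dir / "sample.json").read_text())
annotations = json.loads((table_dir / "sample_annotation.json").read_text())

# Group annotation records by the sample token they reference.
anns_by_sample = defaultdict(list)
for ann in annotations:
    anns_by_sample[ann["sample_token"]].append(ann)

for sample in samples[:3]:
    print(sample["token"], "->", len(anns_by_sample[sample["token"]]), "annotations")
```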
attribute.json
The attribute.json file contains metadata about the attributes of objects in the dataset. An attribute is a property of an instance that can change while the category remains the same. Example: a vehicle being parked/stopped/moving, and whether or not a bicycle has a rider.
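For reference, a record in this file typically looks like the following (field names follow the public nuScenes schema; the token value is a placeholder):

```python
# Illustrative attribute.json record; the token is a placeholder.
attribute = {
    "token": "attr-token-001",
    "name": "vehicle.moving",
    "description": "Vehicle is moving.",
}
```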
calibrated_sensor.json
The calibrated_sensor.json file contains calibration data for each sensor used in the dataset. It defines a particular sensor (lidar/radar/camera) as calibrated on a particular vehicle. All extrinsic parameters are given with respect to the ego vehicle body frame. All camera images come undistorted and rectified.
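An example record, with placeholder values, might look like this (field names per the public nuScenes schema):

```python
# Illustrative calibrated_sensor.json record; all values are placeholders.
calibrated_sensor = {
    "token": "cs-token-001",
    "sensor_token": "sensor-token-001",       # points to a record in sensor.json
    "translation": [0.94, 0.0, 1.84],         # meters, in the ego vehicle body frame
    "rotation": [0.70, -0.01, 0.01, -0.71],   # quaternion (w, x, y, z)
    "camera_intrinsic": [],                   # empty for lidar/radar sensors
}
```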
category.json
The category.json file contains the taxonomy of object categories in the dataset (e.g. vehicle, human). Subcategories are delineated by a period (e.g. human.pedestrian.adult).
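A typical record (schema field names; the token is a placeholder):

```python
# Illustrative category.json record.
category = {
    "token": "cat-token-001",
    "name": "human.pedestrian.adult",
    "description": "Adult pedestrian.",
}
```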
ego_pose.json
The ego_pose.json file contains the ego vehicle pose at a particular timestamp, given with respect to the global coordinate system of the log's map. The ego_pose is the output of a lidar map-based localization algorithm described in the nuScenes paper. The localization is 2-dimensional in the x-y plane.
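An illustrative record, assuming the standard schema fields (values are placeholders):

```python
# Illustrative ego_pose.json record; values are placeholders.
ego_pose = {
    "token": "pose-token-001",
    "timestamp": 1533151603547590,           # microseconds
    "rotation": [0.999, 0.0, 0.0, 0.045],    # quaternion (w, x, y, z) in the global frame
    "translation": [410.5, 1180.2, 0.0],     # meters, global (map) frame
}
```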
instance.json
The instance.json file contains metadata about each object instance in the dataset, e.g. a particular vehicle. This table is an enumeration of all object instances that were observed. Note that instances are not tracked across scenes, only within a given scene.
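A typical record looks like this (schema field names; placeholder tokens):

```python
# Illustrative instance.json record.
instance = {
    "token": "inst-token-001",
    "category_token": "cat-token-001",        # points to category.json
    "nbr_annotations": 39,                    # how many times this instance was annotated
    "first_annotation_token": "ann-token-001",
    "last_annotation_token": "ann-token-039",
}
```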
log.json
The log.json file contains information about the log from which the data was extracted.
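An example record (placeholder values, schema field names):

```python
# Illustrative log.json record.
log = {
    "token": "log-token-001",
    "logfile": "n008-2018-08-01-15-16-36-0400",
    "vehicle": "n008",
    "date_captured": "2018-08-01",
    "location": "boston-seaport",
}
```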
map.json
The map.json file contains metadata about the map data that is stored as binary semantic masks from a top-down view.
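A typical record (placeholder values):

```python
# Illustrative map.json record; the filename is a placeholder.
map_record = {
    "token": "map-token-001",
    "log_tokens": ["log-token-001"],          # logs captured on this map
    "category": "semantic_prior",
    "filename": "maps/your_map_mask.png",     # binary semantic mask, top-down view
}
```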
sample.json
The sample.json file contains metadata about each sample in the dataset. A sample is an annotated keyframe at 2 Hz. The data is collected at (approximately) the same timestamp as part of a single LIDAR sweep.
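An illustrative record (placeholder tokens; empty strings mark the first or last keyframe of a scene):

```python
# Illustrative sample.json record.
sample = {
    "token": "sample-token-002",
    "timestamp": 1533151604047590,            # microseconds
    "scene_token": "scene-token-001",         # scene this keyframe belongs to
    "prev": "sample-token-001",               # "" if this is the first keyframe of the scene
    "next": "sample-token-003",               # "" if this is the last keyframe of the scene
}
```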
sample_annotation.json
The sample_annotation.json file contains detailed information about each annotation in the dataset: a bounding box defining the position of an object seen in a sample. All location data is given with respect to the global coordinate system.
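A typical record, with placeholder tokens and values, looks like this:

```python
# Illustrative sample_annotation.json record; all values are placeholders.
sample_annotation = {
    "token": "ann-token-001",
    "sample_token": "sample-token-002",       # keyframe this box belongs to
    "instance_token": "inst-token-001",       # tracked object instance
    "attribute_tokens": ["attr-token-001"],
    "visibility_token": "4",
    "translation": [409.9, 1176.3, 1.0],      # box center, meters, global frame
    "size": [1.96, 4.62, 1.73],               # width, length, height in meters
    "rotation": [0.68, 0.0, 0.0, 0.73],       # quaternion (w, x, y, z), global frame
    "num_lidar_pts": 42,
    "num_radar_pts": 3,
    "prev": "",                               # previous annotation of this instance
    "next": "ann-token-002",
}
```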
sample_data.json
The sample_data.json file contains metadata about each piece of sensor data in the dataset, e.g. an image, point cloud or radar return. For sample_data with is_key_frame=True, the timestamps should be very close to the sample it points to. For non-keyframes, the sample_data points to the sample that follows closest in time.
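An example record for a lidar keyframe (placeholder tokens and file name):

```python
# Illustrative sample_data.json record for a lidar sweep; values are placeholders.
sample_data = {
    "token": "sd-token-001",
    "sample_token": "sample-token-002",       # sample this record is associated with
    "ego_pose_token": "pose-token-001",
    "calibrated_sensor_token": "cs-token-001",
    "filename": "samples/LIDAR_TOP/your_sweep.pcd.bin",
    "fileformat": "pcd",
    "width": 0,                               # 0 for non-image data
    "height": 0,
    "timestamp": 1533151604047590,
    "is_key_frame": True,                     # stored as true/false in the JSON file
    "prev": "",
    "next": "sd-token-002",
}
```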
scene.json
The scene.json file contains metadata about each scene in the dataset. A scene is a 20s long sequence of consecutive frames extracted from a log. Multiple scenes can come from the same log. Note that object identities (instance tokens) are not preserved across scenes.
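A typical record (placeholder tokens and description):

```python
# Illustrative scene.json record.
scene = {
    "token": "scene-token-001",
    "name": "scene-0001",
    "description": "Short human-readable description of the scene.",
    "log_token": "log-token-001",
    "nbr_samples": 39,                        # number of keyframes in the scene
    "first_sample_token": "sample-token-001",
    "last_sample_token": "sample-token-039",
}
```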
sensor.json
The sensor.json file specifies sensor types.
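An example record (placeholder token):

```python
# Illustrative sensor.json record.
sensor = {
    "token": "sensor-token-001",
    "channel": "LIDAR_TOP",                   # matches the folder name under samples/ and sweeps/
    "modality": "lidar",                      # one of "camera", "lidar", "radar"
}
```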
visibility.json
The visibility.json file describes the visibility of an instance, i.e. the fraction of the annotation that is visible across all 6 camera images, binned into 4 bins: 0-40%, 40-60%, 60-80% and 80-100%.
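A typical record, following the bins listed above:

```python
# Illustrative visibility.json record.
visibility = {
    "token": "4",
    "level": "v80-100",
    "description": "visibility of whole object is between 80 and 100%",
}
```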
The full nuScenes format description can be found in the official nuScenes documentation.
Each point in a .bin point cloud file is described by five fields:

x: The x coordinate of the point.
y: The y coordinate of the point.
z: The z coordinate of the point.
i: The intensity of the return signal.
r: The ring index.
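A minimal sketch for reading such a file with NumPy, assuming the standard nuScenes packing of five float32 values per point (the file path is a placeholder):

```python
import numpy as np

# Each point is stored as 5 consecutive float32 values: x, y, z, intensity, ring index.
points = np.fromfile("samples/LIDAR_TOP/your_sweep.pcd.bin", dtype=np.float32).reshape(-1, 5)

xyz = points[:, :3]        # point coordinates
intensity = points[:, 3]   # return signal intensity
ring = points[:, 4]        # ring index
print(points.shape)
```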