3D Point Cloud and Episodes

This article covers the new labeling interface for 3D Point Clouds in Supervisely, which introduces a significantly enhanced workflow with extended functionality and improved usability.

The 3D Point Cloud labeling tool in Supervisely is designed for visualizing, annotating, and managing complex 3D data collected from sensors such as LiDAR and RADAR. It supports key tasks like object detection and segmentation across static scenes and sequential episodes, making it ideal for applications like autonomous driving.

The latest version introduces a completely redesigned interface that unifies both single-frame and episode-based workflows. It brings a more streamlined and powerful experience with features such as:

  • AI-assisted tools for faster and more accurate labeling:

    • Interactive 3D Object Detection

    • 3D Point Cloud Ground Segmentation

    • 3D Cuboid Tracking

    • Auto Labeling

  • Synchronized 2D–3D annotation using photo context images

  • Timeline navigation for working with sequential frames

  • Flexible, resizable UI layout tailored to your workflow

  • Definitions Panel for convenient class management and quick object editing

  • Advanced settings for customizing visual styles and display preferences

Together, these enhancements provide an integrated and efficient workspace for working with large-scale 3D datasets.

Difference between 3D Point Cloud and 3D Point Cloud Episodes:

3D Point Cloud: A static representation of a scene captured at a single moment in time.

3D Point Cloud Episodes: A dynamic representation consisting of multiple point clouds collected over time, enabling the analysis of movement and change in the scene.
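
For programmatic work, the two modalities correspond to separate project types in the Supervisely Python SDK. The snippet below is a minimal sketch assuming the standard `supervisely` package with `SERVER_ADDRESS` and `API_TOKEN` set in the environment; the workspace ID and project names are placeholders.

```python
import supervisely as sly

api = sly.Api.from_env()  # reads SERVER_ADDRESS and API_TOKEN from the environment

# Static scenes: each item is a single point cloud captured at one moment in time.
static_project = api.project.create(
    workspace_id=123,                               # placeholder workspace ID
    name="lidar-single-frames",
    type=sly.ProjectType.POINT_CLOUDS,
    change_name_if_conflict=True,
)

# Episodes: a sequence of point clouds over time, annotated on a shared timeline.
episode_project = api.project.create(
    workspace_id=123,
    name="lidar-episodes",
    type=sly.ProjectType.POINT_CLOUD_EPISODES,
    change_name_if_conflict=True,
)

print(static_project.type, episode_project.type)
```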

1. 3D AI Assistant

Supervisely's 3D AI Assistant is a universal tool for automating 3D point cloud labeling. It covers every type of labeling scenario for 3D point clouds: 3D object detection, ground segmentation, 3D cuboid tracking, and transfer of 2D annotations from photo context images to the original 3D point clouds. The tool is class-agnostic, meaning it works with any type of object regardless of shape and point density.

Interactive 3D Object Detection

Select the Smart Tool in the left sidebar and circle the target object. A 3D cuboid will be generated around the selected object automatically.

3D Point Cloud Ground Segmentation

  • Detects and annotates the ground level in the 3D scene.

  • Fits a horizontal surface through point clusters and creates a flat figure with a ground class.

  • Useful for scene normalization and filtering.

Click the Auto Labeling tab and press "Ground segmentation".
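
Conceptually, ground segmentation boils down to fitting a plane through the point cloud. The sketch below uses Open3D's RANSAC plane segmentation to separate ground points from the rest of a scene; it only illustrates the general idea, not the AI Assistant's implementation, and `scene.pcd` is a placeholder path.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")  # placeholder path to a LiDAR scan

# Fit the dominant plane with RANSAC; its inliers are treated as ground points.
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.2,   # max distance (m) from the plane to count as ground
    ransac_n=3,               # points sampled per RANSAC hypothesis
    num_iterations=1000,
)
a, b, c, d = plane_model      # plane equation: a*x + b*y + c*z + d = 0

ground = pcd.select_by_index(inlier_idx)
objects = pcd.select_by_index(inlier_idx, invert=True)
print(f"ground points: {len(ground.points)}, remaining: {len(objects.points)}")
```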

3D Cuboid Tracking

  • After creating an annotation in one frame, the assistant can automatically propagate it across subsequent frames.

  • Helps label dynamic objects in sequential datasets with minimal manual input.

  • Uses a dedicated tracking panel, reusing logic from the video tool.
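
To make propagation concrete, here is a toy sketch that carries a cuboid's center forward through an episode under a constant-velocity assumption. The tracker in the AI Assistant is more sophisticated; the values below are purely illustrative.

```python
import numpy as np

def propagate_cuboid(center, per_frame_shift, n_frames):
    """Yield predicted cuboid centers for the next n_frames (constant velocity)."""
    c = np.asarray(center, dtype=float)
    v = np.asarray(per_frame_shift, dtype=float)
    for _ in range(n_frames):
        c = c + v                           # shift the cuboid by the per-frame displacement
        yield c.copy()

# Displacement estimated from two manually labeled frames (hypothetical values).
start_center = [12.4, -3.1, 0.9]            # metres, in the point cloud frame
shift = [0.8, 0.0, 0.0]                     # metres per frame along x

for frame, center in enumerate(propagate_cuboid(start_center, shift, 3), start=1):
    print(f"frame +{frame}: {center}")
```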

Auto Labeling

  • Automatically detects and annotates objects using pre-trained models.

  • Simplifies the process of placing cuboids or segmenting regions of interest in the scene.

Click the Auto Labeling tab and enable the "Highlight object by click" option, then select the manual cuboid labeling tool in the left sidebar and place a cuboid on the target object.

2D to 3D Projection

The photo context panel is now an interactive part of the 3D labeling workspace.

You can annotate context images directly using standard image labeling tools. These annotations are automatically synchronized with the 3D space and become part of the same object instance. 2D and 3D annotations now coexist at the same level: creating or editing an annotation in one view is instantly reflected in the other. This improves labeling precision and scene understanding, especially when certain features are more visible in 2D.

The system seamlessly combines 2D and 3D perspectives in a single environment — no need to switch tools or views.

Additional capabilities:

  • 2D masks created on photo context images can be automatically converted into 3D geometry.

  • Converted figures are visualized directly in the point cloud view.

  • Currently, only masks are supported. Support for 2D bounding boxes is coming soon.

Click on a photo context image, draw a 2D mask, go to the Auto Labeling tab, and press "Create 3D objects from 2D object on camera."
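
The conversion relies on the camera calibration that links each photo context image to the point cloud. The sketch below illustrates the underlying idea with a plain pinhole camera model in NumPy: the points whose projection falls inside the drawn 2D mask are collected into a 3D figure. It is a simplified illustration with assumed intrinsics and extrinsics, not the tool's internal code.

```python
import numpy as np

def lift_mask_to_3d(points, mask, K, R, t):
    """Return the 3D points whose pinhole projection lands inside a 2D mask."""
    cam = (R @ points.T + t.reshape(3, 1)).T        # world -> camera coordinates, (N, 3)
    idx = np.where(cam[:, 2] > 1e-6)[0]             # keep points in front of the camera
    uv = (K @ cam[idx].T).T                         # apply the intrinsic matrix
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective division -> pixel coords
    u = uv[:, 0].round().astype(int)
    v = uv[:, 1].round().astype(int)
    h, w = mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)    # projected pixel lies inside the image
    hit = idx[ok][mask[v[ok], u[ok]] > 0]           # ...and inside the drawn mask
    return points[hit]

# Toy usage with made-up calibration and a rectangular mask (illustrative values only).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
points = np.random.uniform(-2, 2, size=(1000, 3)) + np.array([0.0, 0.0, 5.0])
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:300, 200:400] = 1
print("points lifted into 3D:", len(lift_mask_to_3d(points, mask, K, R, t)))
```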

Note: AI Assistant features are available only to Enterprise customers with the Point Cloud module enabled.

2. Timeline Support

A full timeline component has been added, similar to the one used in video annotation tools:

  • Enables navigation across sequential 3D point cloud frames (episodes).

  • Supports annotation and review of dynamic scenes (episodes) across frame sequences.

  • Provides a comprehensive overview of frame availability, object presence, and annotation density.

3. Modular and Resizable UI Layout

The new interface allows full layout customization:

  • Panels such as photo context, camera views, and definitions can be moved and docked anywhere.

  • Users can arrange the workspace to fit their own workflow and screen space.

  • This flexibility improves usability and efficiency during annotation.

In addition to repositioning view panels, the Settings panel provides advanced customization options — such as adjusting cuboid thickness, customizing class appearance, controlling point cloud display settings, toggling object IDs, and more.

4. Definitions Panel

The Definitions panel is now available in the 3D interface, as in image and video tools:

  • Provides quick access to classes, tags, tool settings, and object styles.

  • Helps manage large taxonomies and maintain consistency across projects.

Editing

To change the class of a selected object:

  1. Click Select Figure tool.

  2. Select the object in any of the view panels.

  3. In the Definitions panel, in the row of the selected object's class, click the mini-icon with two arrows to change the class.

Summary

The updated interface for 3D Point Cloud annotation combines powerful capabilities:

  • Integrated 2D and 3D annotation tools

  • Time-based navigation and frame control

  • Modular UI layout with dockable panels

  • Built-in AI Assistant for autolabeling, tracking, and segmentation

It offers a complete workspace for multi-modal annotation with high accuracy and scalability. Whether working with static point clouds or dynamic 3D sequences, the new tool provides clarity, control, and performance required for modern annotation workflows.

Note: The older version of the 3D Point Cloud tool remains available under legacy status.

  • Users can switch back using the Switch to Legacy Tool button.

  • The legacy version has a static layout and lacks support for definitions, timeline, and 2D–3D synchronization.

  • Further development will focus solely on the new interface.
