Labeling with AI-Assistance

Rethink your usual annotation workflow and boost both labeling speed and quality with the help of interactive smart tools.

What is Smart Tool?

Smart Tool is an interactive segmentation tool that uses the latest artificial intelligence (AI) technologies to create highly accurate object masks, significantly speeding up the annotation process. With seamless integration of several state-of-the-art neural network models, including Segment Anything 2 (SAM 2), ClickSEG, and others, Smart Tool easily adapts to a wide range of use cases and domains.

Availability of SmartTool

Access to Smart Tool features depends on your Supervisely plan. In the Community Edition, you get access to an already running Smart Tool that works with the ClickSEG or Segment Anything 2 (SAM 2) models.

Step-by-Step Usage Guide

Step 1. Open the Labeling Interface

  1. Navigate to your project and open an Image or Video Labeling Tool.

  2. Locate and select the Smart Tool from the toolbar on the left side of the interface.

Step 2. Select the Neural Network Model

  1. Click on the Model ID button in the left toolbar.

  2. Choose a neural network model, such as Segment Anything 2 (SAM 2) or ClickSEG, depending on your project requirements. The selected model will be activated for segmentation tasks.

Step 3. Create a Bounding Box

  1. Identify the object you want to segment in the image.

  2. Click and drag to draw a bounding box around the object.

Ensure that the box encloses the object with roughly 10% padding for better segmentation accuracy.
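
For reference, here is a small sketch of what that 10% padding means in pixel coordinates, expressed with the Supervisely SDK's Rectangle geometry; the image size and box coordinates below are hypothetical.

```python
# Illustrative sketch only: a 10% padding around an object's bounding box,
# clamped to the image borders. Image size and coordinates are hypothetical.
import supervisely as sly

img_height, img_width = 1080, 1920
box = sly.Rectangle(top=200, left=300, bottom=600, right=900)

pad_y = int(box.height * 0.10)   # 10% of the box height
pad_x = int(box.width * 0.10)    # 10% of the box width

padded = sly.Rectangle(
    top=max(0, box.top - pad_y),
    left=max(0, box.left - pad_x),
    bottom=min(img_height - 1, box.bottom + pad_y),
    right=min(img_width - 1, box.right + pad_x),
)
print(padded.to_json())
```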

Step 4. Adjust the Mask with Points

  1. Place positive points (🟢) on areas of the object you want to include in the mask.

  2. Place negative points (🔴) on areas you want to exclude (e.g., background or overlapping objects).

  3. Adjust the mask by adding or moving points until the segmentation aligns with the object’s boundaries.

  4. Press the SPACE key to finalize the segmentation mask. The mask will be saved as a labeled object in the project.
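
The finalized mask is stored as a regular labeled object, so it can also be read back programmatically. Below is a minimal sketch using the Supervisely SDK, assuming SERVER_ADDRESS and API_TOKEN are set in your environment; the image ID is a hypothetical placeholder.

```python
# A minimal sketch: read back the mask that Smart Tool saved as a labeled object.
import supervisely as sly

api = sly.Api.from_env()   # expects SERVER_ADDRESS and API_TOKEN env vars
image_id = 123456          # hypothetical image ID

img_info = api.image.get_info_by_id(image_id)
project_id = api.dataset.get_info_by_id(img_info.dataset_id).project_id

meta = sly.ProjectMeta.from_json(api.project.get_meta(project_id))
ann = sly.Annotation.from_json(api.annotation.download(image_id).annotation, meta)

# Smart Tool results are stored as Bitmap (mask) geometries on regular labels
for label in ann.labels:
    if isinstance(label.geometry, sly.Bitmap):
        print(label.obj_class.name, label.geometry.to_bbox())
```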

Step 5. Refine the Segmentation (Optional)

  1. Use the Brush, Eraser, or Pen tools for manual adjustments:

    • Brush: Add to the mask by painting over missing areas.

    • Eraser: Remove unwanted parts of the mask.

    • Pen: Draw precise boundaries around the object.

  2. Press and hold the SHIFT key while using the tools to make additional edits.

Switch Models for Improved Results (Optional)

  1. If the current model doesn’t produce satisfactory results, return to the Model ID button.

  2. Select a different model and repeat the segmentation steps.

  3. Experiment with multiple models to achieve the best performance for your specific dataset.

Video Tutorial

Master the Smart Tool with our detailed 5-minute video tutorial.

SmartTool Deployment

To successfully deploy and run SmartTool models on your own server, you must have a GPU-enabled compute agent. This ensures optimal performance and efficiency of the tool.

Note: a GPU compute agent is a hard requirement for SmartTool models to function correctly. We also provide a detailed guide on how to deploy your own compute agent, which makes setting up and using SmartTool easier.

Working with Models

  • Pre-trained Models: Supervisely provides access to popular models like ClickSEG, RITM, and Segment Anything, ensuring a seamless user experience.

  • Custom Models: Tailor models to your unique data directly on the platform, with no coding required. Example use cases include agricultural image segmentation and road crack detection.

  • Enterprise Customization: Private instance administrators can configure preferred models via an intuitive administration panel, enabling enterprise-level flexibility.
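
These serving apps can also be queried programmatically, not only from the labeling interface. Below is a minimal, hedged sketch that uses the SDK's generic inference Session to run a deployed model on an image; the task ID of the running serving app and the image ID are assumptions, and the interactive click-based protocol used by Smart Tool itself is driven by the labeling UI rather than this call.

```python
# A minimal sketch, assuming a serving app (e.g. a Segment Anything 2 serving
# session) is already deployed on your instance. IDs below are placeholders.
import supervisely as sly
from supervisely.nn.inference import Session

api = sly.Api.from_env()
session = Session(api, task_id=12345)   # hypothetical task ID of the running serving app

model_meta = session.get_model_meta()   # classes and tags the model can produce
ann = session.inference_image_id(image_id=123456)
print(f"Predicted {len(ann.labels)} objects")
```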

Learn More

In Supervisely, you can train the Smart Tool on your own data directly within the platform, and the best part is that no coding is required! We're actively experimenting with this feature and are impressed by the results. Feel free to explore the blog posts dedicated to this topic:

  • How to Train Smart Tool for Precise Cracks Segmentation in Industrial Inspection

  • Automate manual labeling with custom interactive segmentation model for agricultural images

  • Unleash The Power of Domain Adaptation - How to Train Perfect Segmentation Model on Synthetic Data with HRDA

  • Lessons Learned From Training a Segmentation Model On Synthetic Data

Working with Custom Data

Apart from SmartTool, you have access to a variety of applications tailored for different data handling purposes:

  • SmartTool: primarily used for interactive image annotation and labeling.

  • Apply NN to Images Project: enables the application of neural networks to image projects directly.

  • NN Image Labeling: any NN can be integrated into the labeling interface if it has a properly implemented serving app (for example, Serve YOLOv5).

Moreover, we provide various training applications (Train RITM, Train YOLOv8, Train UNet, Train MMDetection, and others) that let you train these models on your own data, optimizing the image annotation process across diverse data types.

Different modalities

Another great thing about neural networks is that they are easy to adapt to different modalities. That means the Smart Tool works not only on single images, but also on sequential data such as videos, multi-slice medical imaging, and even 3D point clouds!

Please check these tutorials and guides, covering the Images, Video, 3D Point Cloud, and DICOM modalities:

  • Automate manual labeling with custom interactive segmentation model for agricultural images

  • Segment Anything in High Quality (HQ-SAM): a new Foundation Model for Image Segmentation (Tutorial)

  • How to Train Smart Tool for Precise Cracks Segmentation in Industrial Inspection

  • Complete Guide to Object Tracking: Best AI Models, Tools and Methods in 2023

