How to train models

Learn how to use Supervisely Apps to train custom AI models, deploy them on your GPU and use in the labeling toolboxes


Last updated 10 months ago


This 5-minute tutorial is part of the introduction to Supervisely series. You can complete the tutorials one by one, in any order, or jump to the rest of the documentation at any moment.

  • How to import
  • How to annotate
  • How to invite team members
  • How to connect agents
  • How to train models (you are here)

To learn more about training a model on your custom data and pre-labeling (getting predictions from) your images with the trained models, watch this video tutorial.

We will provide a step-by-step guide to training a custom model, using YOLO (v8, v9) as an example. Supervisely offers a no-code solution for training, deploying, and predicting with models directly in your web browser, leveraging user-friendly interfaces and integrated tools.

Step 1. Prepare training data

You have several options for preparing your training data:

  • Upload your images, label them, and then train a custom neural network model.

We recommend starting experiments with several hundred images. Continuously improve your object detection neural network by adding new images, especially those where the model's accuracy is lower.

  • Import your existing training dataset (such as COCO or YOLOv5) and try to build a neural network model directly on your custom data.

  • Pick the ready-to-use data we prepared for you and reproduce this tutorial from start to finish.
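Whichever option you choose, the training wizard will ask you to define train/validation splits. For intuition, a random 80/20 split can be sketched in plain Python (the file names here are hypothetical, not part of the Supervisely SDK):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Randomly split a list of items into train and validation sets."""
    rng = random.Random(seed)   # fixed seed for a reproducible split
    shuffled = items[:]         # copy so the input list is untouched
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

# Hypothetical image names; in practice these come from your project
images = [f"img_{i:04d}.jpg" for i in range(100)]
train, val = train_val_split(images)
print(len(train), len(val))  # 80 20
```

In the web UI the same choice is made with a slider or by assigning datasets to splits, so no code is required.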

Step 2. Deploy an agent

Connect a machine with a GPU to Supervisely; no network configuration is needed, and the connection is secure and private.

Step 3. Train a model

  1. Follow the wizard to configure the main training settings, similar to those allowed by the original repository. You can:

  • Choose all or a subset of classes for training.

  • Define training and validation splits.

  • Select one of the available model architectures.

  • Configure training hyperparameters, including augmentations.

  2. Press the Train button and monitor logs, charts, and visualizations in real time.

  3. The training process generates artifacts, including model weights (checkpoints), logs, charts, additional visualizations of training batches, predictions on validation data, precision-recall curves, confusion matrices, and so on. These artifacts are automatically saved to your Team Files.
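To make the evaluation artifacts concrete, here is a minimal sketch of how a confusion matrix is accumulated from ground-truth vs. predicted class labels (the class names are hypothetical; the real matrix is produced for you by the training app):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, classes):
    """Count (true, predicted) label pairs into a nested dict matrix."""
    counts = Counter(zip(y_true, y_pred))
    return {t: {p: counts[(t, p)] for p in classes} for t in classes}

classes = ["car", "person"]                    # hypothetical classes
y_true = ["car", "car", "person", "person"]
y_pred = ["car", "person", "person", "person"]
cm = confusion_matrix(y_true, y_pred, classes)
print(cm["car"]["person"])  # 1 (one car misclassified as person)
```

Rows where off-diagonal counts are high point at the classes worth adding more training images for.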

Step 4. Deploy a trained model

Once the model is trained, you probably want to try it on your data and evaluate its performance.

Provide the checkpoint (the model weights file in .pt format) and follow the app's instructions.
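Once deployed, the model is reachable over HTTP. As a rough client-side sketch, an inference request might wrap an image into a JSON body like the one below; note that this payload schema is a hypothetical illustration, not the serving app's actual API:

```python
import base64
import json

# NOTE: the payload schema below is a hypothetical illustration;
# consult the serving app's documentation for the real request format.
def build_inference_request(image_bytes: bytes, conf: float = 0.4) -> str:
    """Wrap raw image bytes into a JSON payload for a REST inference call."""
    payload = {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "settings": {"conf": conf},  # e.g. a confidence threshold
    }
    return json.dumps(payload)

raw = b"fake image bytes"            # stand-in for open(path, "rb").read()
body = build_inference_request(raw)
```

Sending the request is then a single HTTP POST to wherever the serving app is listening.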

Step 5. Get predictions

Option 1. Integrate model in Labeling Interface

This approach lets you automatically pre-label images and then manually correct only the model's mistakes.

Option 2. Apply model to all images at once

The app will iterate over all images in your project, apply your model in a batch manner, and save all predicted labels to a new project.
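The batch-application loop can be pictured as follows (a pure-Python sketch with hypothetical image names; the app handles this iteration for you):

```python
def batched(items, batch_size):
    """Yield consecutive batches of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

images = [f"img_{i}.jpg" for i in range(10)]   # hypothetical project images
batches = list(batched(images, batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```

Each batch is sent to the deployed model, and the predicted labels are written to a new project so your source annotations stay untouched.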

Step 6. Export weights

The trained model can be easily exported and used outside the platform. Go to the directory with training artifacts in your Team Files and download the model weights in PyTorch (.pt) format for external use.

Here is a Python example of inference:

from ultralytics import YOLO

# Load your model
model = YOLO("my_checkpoint.pt")

# Predict on an image
results = model("/a/b/c/my_image.jpg") 

# Process results list
for result in results:
    boxes = result.boxes  # Boxes object for bbox outputs
    masks = result.masks  # Masks object for segmentation masks outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    

For more details, check the main Neural Networks section of the documentation, starting with the Overview.

Before you start training or running neural networks, you need to connect your PC or a cloud server with a GPU to Supervisely by running a simple command in your terminal. This connection allows you to train neural networks and run inference directly from the Supervisely web interface. You can find detailed instructions in the How to connect agents tutorial.

Open the training app from your labeled data project: click the [⫶] button → Neural Networks → YOLO → Train YOLO (v8, v9).

Use the Serve YOLO (v8, v9) app to deploy your model as a REST API service so it can receive images and return predictions in response.

In Supervisely you can quickly deploy custom or pretrained neural network model weights on your GPU using Supervisely Serving Apps in just a few clicks.

Use the NN Image Labeling app to apply your model to images or regions of interest during annotation, and configure inference settings such as confidence thresholds or the subset of model classes to use.

Use the Apply NN to Images Project app to pre-label all images in a project. Follow the wizard to configure settings and run batch inference (connect to the model, select model classes, configure inference settings, and preview predictions).

Now you can follow the YOLOv8 documentation to get predictions on images.
