Overview
Intro: Supervisely Neural Network Lifecycle
Supervisely provides a complete ecosystem for developing, training, and deploying neural networks, covering the entire machine learning lifecycle from data preparation to production deployment. The platform is designed to simplify the complex process of NN model development into a seamless, flexible workflow.
The neural network lifecycle includes:
Data Annotation: Annotate data with advanced labeling tools and AI-assisted features.
Versioning & QA: Manage datasets effectively with AI Search, organize them, and ensure quality control through interactive dashboards and statistics, see Quality Assurance and Statistics.
Model Training: Train state-of-the-art open-source models on your own hardware or in the cloud.
Model Evaluation: Evaluate models with detailed metrics, visualizations, and per-image statistics (check our Detection Evaluation Dashboard).
Track and Compare Models: Track experiments, compare results, and reproduce any run with Supervisely Experiments.
Deploy: Deploy trained models as APIs, Docker containers, or simple Python scripts (see Inference & Deployment).
Predict: Use models for prediction, pre-labeling, and video analysis via Prediction API.
Improve Models: Continuously improve your models through active learning and human-in-the-loop workflows.
Export: Export model weights to multiple formats (PyTorch, ONNX, TensorRT) without vendor lock-in. Your checkpoints can be downloaded and used outside of Supervisely.

Train
In the Supervisely Platform, you can train popular models and frameworks on your data with ease. The platform's modular design allows you to customize training apps to suit your needs and even integrate your own models and architectures. Every training session is tracked in the Experiments table, which records all relevant information about your models, datasets, and evaluation reports.

Train state-of-the-art models: YOLOv8 - YOLOv12, RT-DETRv2, MMDetection, SAM 2.1.
Open source: All apps and Supervisely SDK are open-source, you can customize them to your needs.
No vendor lock: You aren't locked within Supervisely. Use your trained models outside of Supervisely Platform as standalone PyTorch models or with the help of Supervisely SDK.
Model Export: Export models to ONNX, TensorRT formats for optimized inference.
Automate Training: Automate model training via API to handle custom scenarios.
Model & Data Versioning: All your experiments are reproducible. Whenever you start training, a snapshot of your data is created. You can always return to previous versions of your data and models.
Integrate Custom Models: Integrate your own model architectures for training and inference so they work seamlessly within the platform.
Active Learning support: Continuous model improvement by learning from newly added data with efficient sampling strategies.
Workflow charts: Visual dashboard that shows every ML operation, from data preparation to model deployment, all in one clear diagram.
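The active-learning item above hinges on a sampling strategy that decides which newly added images are worth annotating next. A minimal sketch of least-confidence sampling in plain Python (the confidence dictionary is a toy stand-in for real model scores, not Supervisely's actual API):

```python
def select_for_labeling(confidences, k):
    """Least-confidence sampling: pick the k images the model is least sure about.

    confidences: {image_id: top-class confidence from the current model}
    """
    ranked = sorted(confidences, key=confidences.get)  # lowest confidence first
    return ranked[:k]

# The model is most uncertain about img2 and img3, so they go to annotators first.
batch = select_for_labeling({"img1": 0.98, "img2": 0.41, "img3": 0.73}, k=2)
```

Labeling the selected batch and retraining closes the loop: each iteration spends annotation effort where the current model is weakest.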
Evaluate
At the end of training, the Model Evaluation Benchmark is launched automatically for the best checkpoint, and you can examine a detailed evaluation report with performance results. It covers a broad set of metrics, charts, and visualizations.

Auto model evaluation after each training session.
Very detailed evaluation report with a wide set of metrics, charts, and prediction visualizations.
You can launch the evaluation manually for every checkpoint.
Currently supported for object detection, instance segmentation, and semantic segmentation.
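As an illustration of what a detection evaluation computes under the hood, here is a minimal precision/recall sketch over bounding boxes, using greedy matching at a single IoU threshold (the real benchmark reports a far richer set of metrics and visualizations):

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    # Greedily match each prediction to the best still-unmatched ground-truth box.
    matched, tp = set(), 0
    for p in preds:
        candidates = [i for i in range(len(gts)) if i not in matched]
        if not candidates:
            continue
        best = max(candidates, key=lambda i: iou(p, gts[i]))
        if iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# One correct detection out of one prediction and two ground-truth boxes.
p, r = precision_recall([(0, 0, 10, 10)], [(0, 0, 10, 10), (20, 20, 30, 30)])
```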
Model Comparison & Versioning
Each model trained in Supervisely is recorded in the Experiments table, which displays all your trained models, their metrics, links to train and val datasets, evaluation reports, and other useful information. In addition to the evaluation report of a single model, you can generate a comparison report of two or more models, letting you compare them in as much detail as during the evaluation stage: it includes various pivot tables, charts, and comparison tools. Additionally, you can quickly deploy models or fine-tune them right from the Experiments table.
Read more in Experiments documentation.
The Experiments table with all your trained models and experiments.
Quickly deploy models or fine-tune them right from the Experiments table.
Generate very detailed comparison reports of two or more models, including various pivot tables, charts, and comparison tools.
Data and model versioning.
Data and model workflow diagrams to understand, reproduce, and track changes across all your ML operations.
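At its simplest, the comparison report aggregates metrics across runs, and picking a winner per metric is a per-metric argmax over the experiment records. A toy sketch (the run names and numbers here are made up for illustration):

```python
def best_per_metric(runs):
    """Given {run_name: {metric: value}}, return the winning run for each metric,
    assuming higher is better for every metric listed."""
    metrics = {m for record in runs.values() for m in record}
    return {m: max((r for r in runs if m in runs[r]), key=lambda r: runs[r][m])
            for m in metrics}

winners = best_per_metric({
    "rtdetr_v2": {"mAP": 0.52, "precision": 0.71},
    "yolov12":   {"mAP": 0.49, "precision": 0.75},
})
# winners == {"mAP": "rtdetr_v2", "precision": "yolov12"}
```

Real comparison reports go well beyond an argmax (per-class breakdowns, pivot tables, prediction galleries), but the underlying record shape is the same: one metrics dictionary per experiment.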
Deploy
After your model has been trained, validated, and is ready for use, you can deploy it as an API service in several ways:
Supervisely Serving Apps within the Platform: the fastest and most user-friendly way, with a convenient web UI.
Deploy & Predict via API: Deploy models and get predictions via API in your Python code.
Deploy in Docker: Use Docker containers for deploying your models in a consistent environment.
Using trained models outside of Supervisely: You can always download a plain PyTorch checkpoint and use it outside of Supervisely infrastructure in your code, or download its ONNX / TensorRT exported versions.
There is no vendor lock-in: you can use your models the same way as if you had trained them with your own code.
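To make the API route concrete, here is a minimal HTTP client sketch. The payload fields and endpoint shape are hypothetical placeholders, not Supervisely's actual request schema (consult the Prediction API documentation for the real interface); the point is only that a deployed model is reachable as a plain HTTP service:

```python
import json
import urllib.request

def build_request(image_url, conf=0.5):
    # Hypothetical payload shape; check the Prediction API docs for real fields.
    return {"input": {"image_url": image_url}, "settings": {"confidence": conf}}

def predict(endpoint, image_url):
    # POST the JSON payload to the deployed model's HTTP endpoint.
    data = json.dumps(build_request(image_url)).encode()
    req = urllib.request.Request(endpoint, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("https://example.com/street.jpg", conf=0.4)
```

The same request works whether the model runs as a Serving App on the platform, in a local Docker container, or on your own server.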
Predict
After you've deployed a model, you can interact with it in different ways:
Apply model in platform: If you deployed a model via a Supervisely Serving App, you can easily run predictions through the NN Applying apps with a web UI: Apply NN to Images, Apply NN to Videos.
Predict via API: After deploying a model in Supervisely, you can get predictions via the Prediction API in your Python code. This is a convenient way to integrate model inference into your existing pipelines.
Predict with CLI arguments: If you deploy a model locally or in a Docker container, you can run it through a simple command-line interface and get predictions in a few seconds.
Standalone PyTorch model: You can use your trained checkpoints outside of Supervisely infrastructure in your code as a plain PyTorch model (see details in Using Standalone PyTorch Models).
With the help of Supervisely SDK and Apps, you can easily apply your model to:
Project or dataset of images.
Folder of images on your machine.
Video. Our SDK automatically breaks the video into frames, and the model is applied frame by frame.
Tracking by detection. We offer ready-to-use scripts for quickly using your detector for tracking tasks.
3D Lidar data and medical data.
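The frame-by-frame video pattern from the list above reduces to a few lines of plain Python. Here the detector is a dummy stand-in; in practice the SDK extracts the frames and the detector is your deployed model:

```python
def detect(frame):
    # Dummy stand-in for a real model call; returns (label, confidence) pairs.
    return [("car", 0.9)] if sum(frame) % 2 == 0 else []

def apply_to_video(frames):
    # Apply the detector frame by frame and collect per-frame predictions,
    # mirroring how the SDK breaks a video into individual frames.
    return {i: detect(f) for i, f in enumerate(frames)}

results = apply_to_video([[0, 2], [1, 2], [4, 4]])
```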
We connect all neural networks together through a unified API and data format, so you don't need to write boilerplate code for common use cases, such as applying a model to a dataset or a video. This allowed us to build a wide ML Ecosystem covering the entire lifecycle of neural networks: from data labeling to continuous model development.
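The tracking-by-detection scripts mentioned above boil down to associating detections across consecutive frames. A minimal greedy nearest-centroid association step, as a sketch (production trackers add IoU matching, motion models, and track lifecycle management):

```python
def associate(prev_tracks, detections, max_dist=50.0):
    """Greedily link new detections to existing tracks by centroid distance.

    prev_tracks: {track_id: (cx, cy)} last known centers of active tracks.
    detections:  list of (cx, cy) centers detected in the current frame.
    Returns {track_id: detection_index}; unmatched detections would start
    new tracks in the caller.
    """
    assignments, used = {}, set()
    for tid, (tx, ty) in prev_tracks.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

# Track 1 picks up the nearby detection at (12, 9); track 2 the one at (102, 98).
links = associate({1: (10.0, 10.0), 2: (100.0, 100.0)},
                  [(102.0, 98.0), (12.0, 9.0)])
```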