Experiments
Supervisely records all your training experiments and provides an organized way to manage them, making it easier and more efficient to work with your trained models. You can find your experiments in the Experiments section of the right sidebar of the Supervisely platform. Here you can see a table with your experiments and their details, such as the model and framework, training data, evaluation metrics with the full evaluation report, TensorBoard logs, and other information.
The Experiments table serves as a complete historical record of your team's ML journey, capturing:
Exact model configurations and hyperparameters
Training dataset versions and preprocessing steps
Evaluation metrics with detailed visualizations in the full report
TensorBoard logs with training charts, losses, and other metrics
Environment details and framework versions
You can apply your model directly from the experiments table by clicking the button with the "fire" icon. This launches the Predict App, where you can select a project or dataset and configure inference settings.
Find more options in the "3 dots" menu of an experiment:
Training Session: Opens the training session where the training process was run.
Show hyperparameters: Shows the hyperparameters used in the experiment.
Finetune: Allows you to finetune your model on new data.
Train New: Start a new training experiment with the same model and hyperparameters. This allows you to quickly reproduce experiments without manually configuring everything again.
Deploy: Launches a serving app where the model will be deployed for inference.
Show logs: Opens the app's logs of the training session.
Files: Opens the result files of the training in Team Files, including model weights, evaluation reports, configuration files, and other artifacts.
Download folder: Downloads a ZIP archive with all the result files of the experiment.
You can click on any experiment in the table to open its details and view all the information about the experiment.
It includes the following sections:
Training Information: The experiment name, model, framework, computer vision task, device, training duration, and the base checkpoint in the case of transfer learning.
Training Data: The dataset and project used for training, as well as the number of images in the training and validation sets.
Evaluation Metrics: The evaluation metrics of the model, such as accuracy, precision, recall, F1 score, and other metrics specific to the computer vision task. You can also open the full evaluation report with detailed metrics and visualizations.
Hyperparameters: The hyperparameters used for training, such as learning rate, batch size, number of epochs, and other parameters specific to the training framework.
Predictions: Allows you to view the predictions made by the model on the validation dataset.
Training Logs: You can analyze the training charts with losses and metrics, or open the TensorBoard dashboard from here.
Checkpoints: You can view and manage the checkpoints created during the training process, as well as the best checkpoint based on the evaluation.
Code examples & API usage: You can find all the code examples and usage instructions for the model in the experiment details page. This allows you to quickly understand how to use the model inside and outside Supervisely platform.
You can also do quick actions with the model:
Apply Model: Launches the Predict App where you can select a project/dataset and configure inference settings.
Deploy: Launches a serving app where the model will be deployed for inference.
Train New: Start a new training experiment with the same model and hyperparameters, but with different data or settings. This allows you to quickly reproduce experiments without manually configuring everything again.
Finetune: You can finetune your model on new data to improve its performance or adapt it to another downstream task. This allows you to continue training your model without starting from scratch.
Download Model: Click the "arrow down" button to download the model files, including the model weights and configuration. This allows you to use the model outside of the Supervisely platform or in other applications (see the sketch below).
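The same artifacts can also be fetched programmatically. A minimal sketch using the Supervisely Python SDK, assuming you know your team ID and the checkpoint's remote path as shown in the experiment's Files entry in Team Files (both values below are placeholders):

```python
import supervisely as sly

api = sly.Api.from_env()

# Placeholders: your team ID and the checkpoint's remote path,
# as shown in the experiment's "Files" entry in Team Files.
team_id = 8
remote_path = "/experiments/1234/checkpoints/best.pth"
local_path = "models/best.pth"

sly.fs.ensure_base_path(local_path)  # create the local folder if missing
api.file.download(team_id, remote_path, local_path)
print(f"Saved to {local_path}")
```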
You can start a new experiment by clicking the Start button. This will open a wizard where you can set up the experiment configuration, such as the model, training data, hyperparameters, and other settings. Once you have configured the experiment, the training app will be launched, and you can start the training process.
The configuration wizard includes the following steps:
Choose Computer Vision Task: Select the type of computer vision task you want to solve, such as classification, object detection, segmentation, etc. This will narrow down the list of available models and frameworks to choose from.
Framework: Select the framework you want to use for training. A framework is a codebase that implements specific model architectures and training algorithms. Each framework typically offers multiple model variants (like different sizes or configurations) built on the same core design and training methodology.
Model: Choose the model architecture you want to train. The available models are shown in a table with their details, such as the number of parameters and metrics on benchmark datasets (such as COCO).
Training Data: Select a project for training. All projects in your team will be shown in the table.
Classes: Select the classes you want to train the model on. Some classes can be automatically converted to the needed format. For example, bitmap masks can be converted to bounding boxes for training an object detection model.
Train/Validation Split: Configure how the training and validation datasets are split. You can choose to split either by datasets, by collections, or randomly (a sketch of a random split follows this list).
Hyperparameters: Set the hyperparameters for training, such as learning rate, batch size, etc. Parameters are specific to the framework you selected.
Evaluation & Speed test: You can enable or disable the final evaluation of the best checkpoint on the validation dataset after training. This will generate a full evaluation report with detailed metrics and visualizations. Additionally, you can enable the speed test to measure the inference speed of the model. This will provide you with metrics like FPS and latency on your hardware.
Model Export: Choose which formats you want to export the trained model to. ONNX is a widely supported format that can be used in various frameworks and platforms, while TensorRT is optimized for NVIDIA GPUs and provides high performance for real-time applications (an ONNX loading sketch follows this list).
GPU: Select a connected machine with a GPU to run the training on. You need to connect your machine to the Supervisely platform first. See how to connect an agent.
Finalization: You can optionally set the name of the experiment and start the training process. The training app will be launched, and you can monitor the training progress in the application.
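For intuition, the random option in the Train/Validation Split step boils down to shuffling the item list and cutting it at a ratio. A minimal, framework-agnostic sketch with an example 80/20 split:

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle items (e.g. image names) reproducibly and cut them
    into train and validation subsets at the given ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = max(1, int(len(items) * val_fraction))
    return items[n_val:], items[:n_val]

train, val = train_val_split([f"img_{i}.jpg" for i in range(100)])
print(len(train), len(val))  # 80 20
```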
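If you enable ONNX export, the resulting file can be loaded outside the platform with onnxruntime. The sketch below also times inference in the spirit of the speed test; the model path, input name, and input shape are placeholders that depend on the exported architecture:

```python
import time

import numpy as np
import onnxruntime as ort

# Placeholder path to the exported model downloaded from Team Files.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# The input name and shape depend on the exported architecture.
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

session.run(None, {input_name: dummy})  # warm-up run
n_runs = 20
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, {input_name: dummy})
latency = (time.perf_counter() - start) / n_runs
print(f"~{latency * 1000:.1f} ms per image, ~{1 / latency:.1f} FPS")
```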
You can compare the evaluation results of different experiments: (1) select them in the experiments table, then (2) go to Selected -> Compare Model Evaluation. This will open a comparison page where you can see the full evaluation reports of the selected experiments side by side.
You can also view the TensorBoard logs of the selected experiments by clicking Selected -> Compare Training Metrics.
You can deploy your trained models for inference directly from the experiments table. This allows you to quickly apply your models to new data and start making predictions. To apply your model, hover over the experiment in the table and click the button with the "fire" icon. To deploy your model, click the "3 dots" menu of an experiment and select "Deploy". This will launch a serving app where the model will be deployed.
You can finetune your trained model on new data by clicking the Finetune button in the "3 dots" menu of an experiment. This will launch a training app with the same model and hyperparameters, but with different data. This allows you to continue training your model without starting from scratch.
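Conceptually, finetuning resumes optimization from the saved weights instead of a random initialization. A framework-agnostic PyTorch sketch (the architecture, checkpoint layout, and learning rate below are hypothetical; the actual logic lives inside the training app):

```python
import torch
from torchvision.models import resnet18

# Hypothetical example: the real architecture and checkpoint layout
# depend on the framework used in the original experiment.
model = resnet18(num_classes=10)
state = torch.load("best.pth", map_location="cpu")
model.load_state_dict(state)

# A lower learning rate than in the original run is typical for
# finetuning, so the pretrained weights are not destroyed.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# ...continue the standard training loop on the new data...
```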