Multiview videos
Annotate multiple synchronized videos from different camera angles as a unified scene with shared objects and synchronized playback.
Overview
Multiview mode allows you to work with multiple videos of the same scene captured from different cameras or angles simultaneously. All videos within one dataset form a synchronized multiview group, enabling efficient annotation of complex multi-camera setups.
This is particularly useful for:
Autonomous driving — multiple cameras mounted on a vehicle
Surveillance — security cameras covering the same area from different angles
Sports analytics — tracking athletes from multiple viewpoints
3D reconstruction — multi-camera setups for depth estimation

Key Concepts
Unified Objects
If you have a common object appearing across different videos (e.g., a car visible from front, left, and right cameras), you can annotate it as a single Supervisely object with a shared ID.
The object maintains its identity across all videos in the multiview group
When you export and re-import the project, the object is recreated as a unified entity
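In exported annotations, this shared identity shows up as the same object key in each video's annotation file. The fragment below is illustrative only (the key and class name are made up):

```json
{
  "objects": [
    {
      "key": "6b9d3a1f4c2e4d8f9a0b1c2d3e4f5a6b",
      "classTitle": "car",
      "tags": []
    }
  ]
}
```

The same key value appears in the annotation file of every video where the car is labeled, which is how the object is recreated as a unified entity on re-import.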
Synchronized Playback
All videos in a multiview group can be played back synchronously:
Navigate through frames simultaneously across all views
Configure frame offsets if videos have different starting points
Control the order of videos in the labeling interface using metadata files with videoStreamIndex (see the "How to Create Multiview Project" section below)
Easily compare and analyze object behavior from multiple angles in real time
How to Create Multiview Project
Via UI (Drag & Drop)
Prepare your data according to the required structure (see below):
Create a folder for each multiview group (dataset)
Place all camera videos inside the corresponding folder
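An illustrative layout (folder and file names are hypothetical):

```
multiview_project/          # drop this folder or an archive of it
├── scene_1/                # one multiview group = one dataset
│   ├── camera_front.mp4
│   ├── camera_left.mp4
│   └── camera_right.mp4
└── scene_2/
    ├── camera_front.mp4
    └── camera_left.mp4
```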

Go to your workspace and start by creating a new project (+ New ⇨ New Project Wizard)
Select the Videos ⇨ Multi-view labeling interface and proceed
Press Import to open the import app wizard
Drag and drop your prepared folder or archive
The import app will automatically group videos by datasets
In this example we use videos from the MAVREC Dataset (CC BY 4.0) annotated by the Supervisely team.
For detailed import format specification, see Multiview Import Format.
Via Python SDK
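A minimal sketch of the SDK route is below. It assumes the supervisely package is installed and that SERVER_ADDRESS / API_TOKEN are set in the environment; the folder-scanning helper and all names (workspace id, folder paths, project name) are hypothetical, and api.project.create, api.dataset.create, and api.video.upload_paths are standard SDK calls — consult the SDK reference for the exact multiview options available on your server version.

```python
import os

def group_videos_by_folder(root):
    """Map each subfolder (one multiview group) to its sorted video paths."""
    groups = {}
    for name in sorted(os.listdir(root)):
        folder = os.path.join(root, name)
        if os.path.isdir(folder):
            groups[name] = sorted(
                os.path.join(folder, f)
                for f in os.listdir(folder)
                if f.lower().endswith((".mp4", ".avi", ".mov"))
            )
    return groups

def upload_multiview_project(root, workspace_id, project_name="multiview-videos"):
    """Create a video project and one dataset per camera-group folder.

    Requires `pip install supervisely` and the SERVER_ADDRESS / API_TOKEN
    environment variables. If your server version does not enable the
    Multi-view labeling interface automatically, select it for the project
    in the UI (see the steps above).
    """
    import supervisely as sly  # imported lazily so the helper above stays standalone

    api = sly.Api.from_env()
    project = api.project.create(
        workspace_id, project_name,
        type=sly.ProjectType.VIDEOS,
        change_name_if_conflict=True,
    )
    for ds_name, paths in group_videos_by_folder(root).items():
        dataset = api.dataset.create(project.id, ds_name)
        names = [os.path.basename(p) for p in paths]
        api.video.upload_paths(dataset.id, names=names, paths=paths)
    return project
```

Keeping the grouping logic separate from the upload calls lets you verify the folder-to-dataset mapping locally before touching the server.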
Open the project in the Supervisely UI to start annotating in multiview mode.

Optional: Specify Video Order
To control the order of videos in the multiview labeling interface, you can create a metadata JSON file for each video with the suffix .meta.json. This file should include a videoStreamIndex field that defines the video's position. For example, for a video named aerial_scene_1.mp4, create a file named aerial_scene_1.mp4.meta.json with the following content:
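The file body is a single-field JSON object. A minimal sketch that generates the sidecar files — the assumption here is that the order of the input list should become the display order:

```python
import json

def write_stream_index_meta(video_paths):
    """Write <name>.meta.json next to each video; list order defines the index."""
    for index, path in enumerate(video_paths):
        with open(path + ".meta.json", "w") as f:
            json.dump({"videoStreamIndex": index}, f, indent=2)

# For aerial_scene_1.mp4 this produces aerial_scene_1.mp4.meta.json containing:
# {
#   "videoStreamIndex": 0
# }
```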
Labeling in Multiview Mode
Interface Overview
The multiview labeling interface includes the following key elements:
Video panels & Synchronized timeline — each video is displayed in its own panel with a shared timeline for synchronized navigation:
Objects & Tags panel — shows objects across all videos and video-specific tags:

Multiview settings — configure frame offsets to align videos temporally and ensure synchronized playback. You can specify the number of frames or a time in milliseconds for each video:
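The two offset units are related through each video's frame rate; a quick sanity check of the conversion (fps and offset values are illustrative):

```python
def ms_to_frames(offset_ms, fps):
    """Convert a time offset in milliseconds to the nearest whole frame count."""
    return round(offset_ms * fps / 1000.0)

# A camera that started 500 ms late at 30 fps is 15 frames behind:
print(ms_to_frames(500, 30))   # 15
print(ms_to_frames(1000, 25))  # 25
```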

Annotating Objects
Create an object on any video using annotation tools (rectangle, polygon, etc.)
The same object can be annotated on other videos in the group
Objects with the same ID are linked across all videos
Video-specific Tags
Unlike objects, tags apply only within a specific video:
When you tag a figure, frame, or video, that tag is associated only with that particular video
Tags are displayed only on the video where they were created
This allows for view-specific annotations (e.g., "occluded" tag on one camera angle)
Auto-tracking
To speed up the annotation process, you can use the Auto Track app to automatically track objects across multiple videos simultaneously.
To use auto-tracking in multiview mode:
Open a multiview video project and navigate to the desired frame.
Configure the tracking settings in the tracking tool:
Set number of frames to track
Choose direction (forward/backward)
Select the tracking engine (Auto Track app)
Enable/Disable automatic tracking

Annotate the object on one of the videos and start the tracker.
After tracking is complete on one video, switch to another video in the multiview interface.
Create a new figure for the same object and press Alt + Space (or Option + Space on Mac) to complete the figure. This will trigger the tracker to extend the annotation for the same number of frames as before.
A simple example of an auto-tracking result in multiview mode:
Export
Use the Export Videos in Supervisely Format app to export your multiview project:
All videos are exported with their annotations in Supervisely format
Object relationships across videos are preserved via shared object keys
Metadata files contain videoStreamIndex for maintaining video order
Project meta includes settings for the multiview configuration
The exported structure can be re-imported to recreate the exact same multiview setup.
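The export generally follows the standard Supervisely video project layout; the tree below is an illustrative sketch (dataset and file names are hypothetical, and details may vary by server version):

```
project/
├── meta.json                      # classes, tags, multiview settings
├── key_id_map.json                # maps shared object keys to server ids
└── scene_1/
    ├── video/
    │   ├── camera_front.mp4
    │   └── camera_left.mp4
    └── ann/
        ├── camera_front.mp4.json
        └── camera_left.mp4.json
```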
Use Cases
Autonomous Driving
Annotate objects visible from front, rear, and side cameras as unified entities
Surveillance
Track people or vehicles across multiple security cameras
Sports Analytics
Follow athletes from different camera angles for comprehensive analysis
Retail Analytics
Monitor customer behavior from multiple store cameras
3D Reconstruction
Annotate corresponding points across stereo or multi-camera setups