NIfTI

Overview

This converter allows you to import NIfTI files into a Supervisely project. It also supports annotations in the NIfTI format (.nii and .nii.gz).

The converter supports both semantic and instance segmentation annotations, as well as the import of volumes without annotations. Examples of the input structure are provided below.

The converter is backwards compatible with the Export volume project to cloud storage application.

All volumes from the input directory and its subdirectories will be uploaded to a single dataset.

Format description

  • Supported image formats: .nii, .nii.gz

  • With annotations: Yes (semantic and instance segmentation)

  • Supported annotation formats: .nii, .nii.gz

  • Data structure: described below

Input files structure

Example 1: grouped by volume name

The NIfTI file should be structured as follows:

📂 dataset_name # ⬅︎ may be archive, root files or nested directory instead
├── 📂 CTChest          # ⬅︎ the same name as the volume name
│   │   # ⬇︎ this directory contains annotations for the CTChest volume
│   ├── 🩻 lung.nii.gz
│   └── 🩻 tumor.nii.gz
├── 🩻 CTChest.nii.gz
└── 🩻 Spine.nii.gz     # ⬅︎ this volume has no annotations

If the volume has annotations, they should be in the corresponding directory with the same name as the volume (e.g. CTChest, without extension).

Annotation files should be named according to the following pattern:

  • Name of the class (e.g. lung, tumor) + .nii or .nii.gz.

  • The class name should be unique for the current volume (e.g. tumor.nii.gz, lung.nii.gz).

  • Annotation files can contain multiple objects of the same class (each object should be represented by a different value in the mask).
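The last point can be illustrated with a short sketch (a hypothetical helper, not the converter's own code; in practice the array would come from a NIfTI reader such as nibabel rather than being fabricated):

```python
import numpy as np

# A minimal sketch of how an instance-segmentation annotation file is
# interpreted: every non-zero value in the mask is a separate object of the
# class named by the file (e.g. tumor.nii.gz -> class "tumor"). In practice
# the array would come from nibabel.load("tumor.nii.gz").get_fdata();
# here we fabricate a tiny volume for illustration.

mask = np.zeros((4, 4, 2), dtype=np.uint8)
mask[0:2, 0:2, 0] = 1   # first tumor instance
mask[2:4, 2:4, 1] = 2   # second tumor instance

def split_instances(mask: np.ndarray) -> dict[int, np.ndarray]:
    """Return one binary mask per object value (0 is background)."""
    return {int(v): mask == v for v in np.unique(mask) if v != 0}

objects = split_instances(mask)
print(sorted(objects))   # [1, 2] -> two objects of the class "tumor"
```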

Example 2: grouped by plane

The NIfTI file should be structured as follows:

For semantic segmentation:

  • The filename must contain one of the required plane identifiers: axl, cor, or sag, anywhere in the name.

  • The file representing the anatomical volume should include an anatomical label (e.g. anatomic) in its filename, identifying the volume type.

  • The annotation file for all classes should also include the plane identifier and an annotation type label (inference, mask, ann, etc.), and end with .nii or .nii.gz.

For instance segmentation:

  • The anatomical volume file must include the plane identifier (axl, cor, or sag) and an anatomical label, and end with .nii or .nii.gz.

  • Each annotation file must also include the plane identifier and an annotation type label (inference, mask, ann, etc.), and end with .nii or .nii.gz.

  • Multiple annotation files per plane are supported, each representing a separate class (and each may contain multiple objects).

Note: Filenames can include other descriptive parts, such as patient or case UIDs, body parts, or other arbitrary identifiers, as long as the required plane and type identifiers are present and the file extension is .nii or .nii.gz.

The plane identifier must be one of cor, sag, or axl. The converter uses these identifiers to group volumes and their annotation files, and requires exactly three anatomical volumes per folder, one for each plane.

Structure example for semantic segmentation:

📂 dataset_name # ⬅︎ may be archive, root files or nested directory instead
├──📄 cls_color_map.txt  # ⬅︎ optional file
├──🩻 axl_anatomic.nii
├──🩻 axl_inference.nii
├──🩻 cor_anatomic.nii
├──🩻 cor_inference.nii
├──🩻 sag_anatomic.nii
└──🩻 sag_inference.nii

Structure example for instance segmentation:

📂 dataset_name # ⬅︎ may be archive, root files or nested directory instead
├──📄 cls_color_map.txt  # ⬅︎ optional file
├──🩻 axl_anatomic.nii
├──🩻 axl_inference_1.nii
├──🩻 axl_inference_2.nii
├──🩻 cor_anatomic.nii
├──🩻 cor_inference_1.nii
├──🩻 cor_inference_2.nii
├──🩻 cor_inference_3.nii
├──🩻 sag_anatomic.nii
└──🩻 sag_inference_1.nii

Example 3: grouped by plane w/ multiple items

If you need to import multiple items at once, place each item in a separate folder. The converter supports any folder structure. Folders may be at different levels, and files will be matched by directory (annotation files must be in the same folder as their corresponding volume). All files will be imported into the same dataset.

Structure example for multiple items directory:

📂 dataset_name # ⬅︎ may be archive, root files or nested directory instead
├──📄 cls_color_map.txt  # ⬅︎ optional file
├──📂 item_1
│  ├──🩻 axl_anatomic.nii
│  ├──🩻 axl_inference_1.nii
│  ├──🩻 axl_inference_2.nii
│  ├──🩻 cor_anatomic.nii
│  ├──🩻 cor_inference_1.nii
│  ├──🩻 cor_inference_3.nii
│  └──🩻 sag_anatomic.nii
├──📂 item_2
│  ├──🩻 axl_anatomic.nii
│  ├──🩻 axl_inference_1.nii
│  ├──🩻 axl_inference_2.nii
│  ├──🩻 cor_anatomic.nii
│  ├──🩻 cor_inference_1.nii
│  ├──🩻 cor_inference_3.nii
│  └──🩻 sag_anatomic.nii
└──📂 item_3
   ├──🩻 axl_anatomic.nii
   ├──🩻 axl_inference_1.nii
   ├──🩻 axl_inference_2.nii
   ├──🩻 cor_anatomic.nii
   ├──🩻 cor_inference_1.nii
   ├──🩻 cor_inference_3.nii
   ├──🩻 sag_anatomic.nii
   └──🩻 sag_inference_1.nii
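The directory-based matching described above can be sketched like this (illustrative only; the paths follow the example tree, and annotation files are paired with the volume sitting in the same folder):

```python
import os
from collections import defaultdict

# Bucket files by their parent directory, mirroring the rule that
# annotation files must live in the same folder as their volume.

def bucket_by_directory(paths):
    buckets = defaultdict(list)
    for path in paths:
        buckets[os.path.dirname(path)].append(os.path.basename(path))
    return dict(buckets)

paths = [
    "dataset_name/item_1/axl_anatomic.nii",
    "dataset_name/item_1/axl_inference_1.nii",
    "dataset_name/item_2/axl_anatomic.nii",
    "dataset_name/item_2/axl_inference_1.nii",
]
buckets = bucket_by_directory(paths)
print(sorted(buckets))   # ['dataset_name/item_1', 'dataset_name/item_2']
```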

Class color map file (optional)

The converter will look for an optional TXT file in the input directory. If present, it will be used to create the classes with names and colors corresponding to the pixel values in the NIfTI files.

The TXT file should be structured as follows:

1 Femur 255 0 0
2 Femoral cartilage 0 255 0
3 Tibia 0 0 255
4 Tibia cartilage 255 255 0
5 Patella 0 255 255
6 Patellar cartilage 255 0 255
7 Meniscus 175 175 175

where:

  • 1, 2, ... are the pixel values in the NIfTI files

  • Femur, Femoral cartilage, ... are the names of the classes

  • 255, 0, 0, ... are the RGB colors of the classes
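Parsing such a file is straightforward; the sketch below is illustrative (the converter's own parser may differ) and only assumes the line format shown above, where class names may contain spaces:

```python
# Parse a class color map line by line: pixel value, class name
# (possibly multi-word), then the three RGB components.

def parse_color_map(text: str):
    classes = {}
    for line in text.strip().splitlines():
        parts = line.split()
        value = int(parts[0])
        r, g, b = (int(c) for c in parts[-3:])
        name = " ".join(parts[1:-3])   # everything between value and RGB
        classes[value] = (name, (r, g, b))
    return classes

sample = """1 Femur 255 0 0
2 Femoral cartilage 0 255 0"""

classes = parse_color_map(sample)
print(classes[2])   # ('Femoral cartilage', (0, 255, 0))
```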

Upload annotations separately

The plane-structured converter supports uploading annotations separately, i.e. attaching annotations to existing volumes. Both dataset-scope and project-wide annotation imports are supported.

By default, annotations are matched with their corresponding volumes based on filenames. However, a custom mapping can be provided via a .json file to explicitly define the mapping.

Input structure example for dataset scope:

🩻 axl_inference_1.nii
🩻 axl_inference_2.nii
🩻 cor_inference_1.nii
🩻 cor_inference_3.nii
📄 color_map.txt # ⬅︎ optional file
📄 mapping.json # ⬅︎ optional file

Input structure example for project-wide import:

📄 mapping.json # ⬅︎ optional file
📄 cls_color_map.txt  # ⬅︎ optional file
📂 dataset_name_1
├──🩻 axl_inference_1.nii
├──🩻 axl_inference_2.nii
└──🩻 cor_inference_3.nii
📂 dataset_name_2
├──🩻 axl_inference_1.nii
├──🩻 axl_inference_2.nii
└──🩻 cor_inference_3.nii
📂 dataset_name_3
├──🩻 axl_inference_1.nii
├──🩻 axl_inference_2.nii
└──🩻 cor_inference_3.nii

JSON mapping

Mapping structure should be as follows:

{
    "cor_inference_1.nii": 123,
    "sag_mask_2.nii": 456
}

where each key is an annotation filename and each value is the ID of the target volume.

If you want to import annotations for the entire project via a JSON mapping:

  1. Pack the annotations inside folders named after the corresponding datasets, as in the example above

  2. Specify the dataset name in the .json file in a path-like manner (dataset_name/annotation_filename)

Example JSON structure with dataset specification:

{
    "dataset1/cor_inference_1.nii": 123,
    "dataset2/sag_mask_2.nii": 456
}
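Such a mapping file can be generated with a few lines of Python (the volume IDs 123 and 456 are placeholders; real IDs come from your Supervisely instance):

```python
import json

# Write a project-wide mapping.json: keys are "dataset_name/annotation_filename",
# values are the IDs of the existing volumes the annotations attach to.
mapping = {
    "dataset1/cor_inference_1.nii": 123,   # placeholder volume ID
    "dataset2/sag_mask_2.nii": 456,        # placeholder volume ID
}

with open("mapping.json", "w") as f:
    json.dump(mapping, f, indent=4)

print(json.loads(open("mapping.json").read())["dataset1/cor_inference_1.nii"])  # 123
```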

Useful links

  • [Supervisely Ecosystem] Export volume project to cloud storage
