
Graph (Keypoints) Tool




What is Graph Tool?

The Graph (Keypoints) Tool is designed for annotating key points on images and videos, providing an accurate way to analyze poses of objects such as humans and animals. This tool is particularly useful for tasks related to pose estimation, including movement tracking, gesture analysis, and behavior understanding.

Video Tutorial

To better understand how to use the Graph (Keypoints) Tool for pose estimation, watch our video tutorial. For animal pose estimation, you can also check out the video: "ViTPose — How to Use the Best Pose Estimation Model on Animals | Computer Vision Tutorial". These tutorials provide straightforward instructions on using the Keypoints Tool for both human and animal pose estimation.

How to Use the Graph (Keypoints) Tool

Follow this step-by-step guide to learn how to use the Graph (Keypoints) Tool effectively for pose annotation:

Create a Keypoint Class

  1. On the Project page, navigate to the Definitions section.

  2. Click + New Class, enter a name, select the Keypoints shape, and choose a color.

  3. Define the key points and connections:

    • Add key points that represent the object's pose (e.g., head, torso, limbs for a human skeleton).

    • Use the Add Node button to place keypoints on an example image, and connect them with the Add Edge button to create a complete skeleton structure.

    • Assign descriptive names to each point or edge and click Save to finalize the class template.

  4. Your defined keypoint class will be saved and ready to use in annotation tasks.
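Under the hood, a keypoint class is stored in the project meta as a graph template: named nodes placed on an example image, plus edges connecting them (see the Supervisely .JSON Format section for the authoritative schema). The sketch below builds such a template from plain Python dictionaries; the exact field names are illustrative assumptions, not a definitive schema.

```python
# Illustrative sketch of a keypoint (graph) class template, similar in
# spirit to a Supervisely project-meta entry. Field names are assumptions;
# consult the "Supervisely .JSON Format" docs for the real schema.

def make_keypoint_class(name, nodes, edges, color="#FF0000"):
    """Build a graph-shaped class definition.

    nodes: {label: (row, col)} positions on the example image.
    edges: [(src_label, dst_label)] skeleton connections.
    """
    return {
        "title": name,
        "shape": "graph",
        "color": color,
        "geometry_config": {
            "nodes": {
                label: {"label": label, "loc": [row, col]}
                for label, (row, col) in nodes.items()
            },
            "edges": [{"src": s, "dst": d} for s, d in edges],
        },
    }

# A tiny three-point "head-neck-torso" skeleton as an example.
cls = make_keypoint_class(
    "person",
    nodes={"head": (50, 100), "neck": (90, 100), "torso": (160, 100)},
    edges=[("head", "neck"), ("neck", "torso")],
)
print(len(cls["geometry_config"]["nodes"]))  # 3
```

The template created in the UI plays the same role: a fixed set of named points and the skeleton edges between them, reused for every object you annotate.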


Manually apply the keypoint template

  1. Open an image or video in the Annotation Toolbox.

  2. Select the pre-defined keypoint template created in the previous step.

  3. Place the points on the image according to the object's pose: click on each landmark of the object to map it to the corresponding keypoint in the template.

  4. Correct the existing points using the Drag a point to move, Disable/Enable point, and Remove point buttons so that they match the subject's pose.


AI-assisted annotation with ViTPose

The ViTPose model provides AI-powered assistance for keypoint annotation, enabling automatic pose estimation for animals and humans.

Run Serve ViTPose app.

  1. Go to the Ecosystem page and find the Serve ViTPose app. You can also run it from the Neural Networks page under the category Images → Pose Estimation (Keypoints) → Serve.

  2. Select the ViTPose+ pre-trained model for animal pose estimation and the type of animal you're interested in, press the Serve button, and wait for the model to deploy.

If you select a model for animal pose estimation, you will also see a list of supported animal species and basic information about the pitfalls of animal pose estimation.

Apply ViTPose to images in the labeling tool.

  1. Run the NN Image Labeling app, connect to ViTPose, create a bounding box labeled with the animal's type name, and click Apply the model to ROI.

For the animal pose estimation task, create the bounding box with a class name that appears in the list of supported animal species; a keypoint skeleton class named {yourclassname}keypoints will then be created. Otherwise, a keypoint skeleton class named animalkeypoints is created.

  2. Correct the keypoints using the Drag a point to move, Disable/Enable point, and Remove point buttons so that they match the animal's pose.

Pro Tip: Increase the point threshold in app settings to avoid unnecessary key points if only part of the object is visible.
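The point-threshold behavior can be sketched in plain Python: each predicted keypoint comes with a confidence score, and raising the threshold drops the uncertain ones. The data shape here is a hypothetical illustration of the logic, not the app's actual internal format.

```python
def filter_keypoints(keypoints, threshold):
    """Keep only keypoints whose confidence meets the threshold.

    keypoints: list of dicts like {"label": ..., "loc": (row, col), "conf": float}
    (a hypothetical shape for illustration; the serving app applies
    equivalent filtering internally based on its point threshold setting).
    """
    return [kp for kp in keypoints if kp["conf"] >= threshold]

predicted = [
    {"label": "nose", "loc": (40, 80), "conf": 0.95},
    {"label": "left_ear", "loc": (30, 60), "conf": 0.20},  # occluded part
    {"label": "tail", "loc": (200, 150), "conf": 0.55},
]

# Raising the threshold removes low-confidence points for occluded parts,
# mirroring the effect of increasing the point threshold in app settings.
kept = filter_keypoints(predicted, threshold=0.5)
print([kp["label"] for kp in kept])  # ['nose', 'tail']
```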


Autolabeling Pipeline: Detection using YOLOv8 + Pose Estimation with ViTPose

An efficient way to automate pose annotation is by combining YOLOv8 for object detection with ViTPose for pose estimation. This combination significantly reduces manual work and speeds up the annotation process. Steps to use the autolabeling pipeline:

Run the YOLOv8 app for detection.

  1. Go to the Ecosystem page, then find and run the Serve YOLOv8 app under Images → Object Detection → Serve.

  2. Deploy the app as a REST API service and select a pre-trained model based on your task requirements.

Apply the detection and pose estimation models to an images project:

  1. Use the Apply Detection and Pose Estimation Models to Images Project app, found in the Ecosystem or under Images → Pose Estimation (Keypoints) → Inference interfaces.

  2. Configure the settings, and the app will automatically apply the detection and pose estimation models to your images.

Outcome: This combination allows you to quickly identify objects and automatically add accurate pose keypoints, even for complex or blurry objects.

After applying AI-assisted annotation, you may need to fine-tune keypoints manually:

  • Drag Points to adjust their position.

  • Enable/Disable Points to toggle visibility as needed.

  • Remove Points if the AI has incorrectly placed them.
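The two-stage pipeline above can be sketched in a few lines of Python. Both models are stubbed out here; in Supervisely the served YOLOv8 and ViTPose apps play these roles, so treat this as a sketch of the data flow rather than working inference code.

```python
# Sketch of the two-stage autolabeling pipeline: a detector proposes
# bounding boxes, then a pose estimator predicts keypoints inside each box.
# Both stages are stubs standing in for the served YOLOv8 and ViTPose apps.

def detect_objects(image):
    """Stub detector: returns bounding boxes as (x, y, w, h, class_name)."""
    return [(10, 20, 100, 200, "person"), (150, 40, 90, 180, "person")]

def estimate_pose(image, box):
    """Stub pose estimator: returns keypoints for the region of interest."""
    x, y, w, h, _ = box
    # Place a couple of illustrative keypoints relative to the box.
    return [
        {"label": "head", "loc": (y + h // 10, x + w // 2)},
        {"label": "torso", "loc": (y + h // 2, x + w // 2)},
    ]

def autolabel(image):
    """Run detection, then pose estimation on every detected box."""
    annotations = []
    for box in detect_objects(image):
        annotations.append({"box": box, "keypoints": estimate_pose(image, box)})
    return annotations

result = autolabel(image=None)  # stubbed models, so no real image is needed
print(len(result))  # 2 objects, each with its own keypoints
```

The same structure scales to a whole project: the detector narrows each image down to regions of interest, and the pose estimator only has to solve the easier per-object problem inside each box.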


Hotkeys

Graph (Keypoints) Tool

  • Place predefined template: Left Mouse Click
  • Toggle keypoint visibility: Command (⌘) + Left Mouse Click
  • Remove keypoint: Shift + Left Mouse Click

Scene Navigation

  • Zoom: Mouse wheel
  • Move scene: hold Mouse wheel