---
license: cc-by-4.0
task_categories:
- video-tracking
tags:
- video-object-segmentation
- single-object-tracking
- point-tracking
- computer-vision
- benchmark
language:
- en
pretty_name: TAG
arxiv: 2510.18822
configs:
- config_name: default
data_files: "*.json"
sep: "\t"
---
# SAM 2++: Tracking Anything at Any Granularity
πŸ”₯ [Evaluation Server](TODO) | 🏠 [Homepage](https://tracking-any-granularity.github.io/) | πŸ“„ [Paper](https://arxiv.org/abs/2510.18822) | πŸ”— [GitHub](https://github.com/MCG-NJU/SAM2-Plus)
## Download
We recommend using `huggingface-cli` to download:
```
pip install -U "huggingface_hub[cli]"
huggingface-cli download MCG-NJU/Tracking-Any-Granularity --repo-type dataset --local-dir ./Tracking-Any-Granularity --local-dir-use-symlinks False --max-workers 16
```
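If you prefer Python, the same download can be done with `snapshot_download` from `huggingface_hub` (the target directory below simply mirrors the CLI command above):
```python
from huggingface_hub import snapshot_download

# Download the full dataset repository to a local directory.
snapshot_download(
    repo_id="MCG-NJU/Tracking-Any-Granularity",
    repo_type="dataset",
    local_dir="./Tracking-Any-Granularity",
    max_workers=16,
)
```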
## πŸ”₯ Latest News
- **[2025-10-27]** To support language-reference tasks such as tracking by natural language specification and referring video object segmentation, we have added a language description of each object to the meta JSON (a minimal loading sketch follows this list).
- **[2025-10-24]** The [SAM 2++ model](https://github.com/MCG-NJU/SAM2-Plus) and part of the [Tracking-Any-Granularity dataset](https://huggingface.co/datasets/MCG-NJU/tracking-any-granularity) have been released. Check out the [project page](https://tracking-any-granularity.github.io/) for more details.
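As a rough illustration of how the language annotations could be read, the sketch below assumes the meta JSON maps each video name to an entry containing a textual description; the file name `meta.json` and the `description` key are placeholders rather than the actual schema, so check the released file before relying on them.
```python
import json

# Hypothetical layout: the real file name and field names may differ.
with open("Tracking-Any-Granularity/meta.json") as f:
    meta = json.load(f)

for video_name, info in meta.items():
    # "description" stands in for the object's language description field.
    print(video_name, info.get("description"))
```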
## Dataset Summary
**T**racking-**A**ny-**G**ranularity (**TAG**) is a comprehensive dataset for training our unified tracking model, SAM 2++, with annotations at three granularities: segmentation masks, bounding boxes, and key points.
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00025.gif"/></td>
<td><img width="220" src="assets/data/00076.gif"/></td>
<td><img width="220" src="assets/data/00045.gif"/></td>
</tr>
</tbody>
</table>
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00102.gif"/></td>
<td><img width="220" src="assets/data/00103.gif"/></td>
<td><img width="220" src="assets/data/00152.gif"/></td>
</tr>
</tbody>
</table>
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00227.gif"/></td>
<td><img width="220" src="assets/data/00117.gif"/></td>
<td><img width="220" src="assets/data/00312.gif"/></td>
</tr>
</tbody>
</table>
## Dataset Description
Our dataset includes **a wide range of video sources**, demonstrating strong diversity and serving as a solid benchmark for evaluating tracking performance. Each video sequence is annotated with **18 attributes representing different tracking challenges**, which can appear simultaneously in the same video. Common challenges include motion blur, deformation, and partial occlusion, reflecting the dataset’s high difficulty. Most videos contain multiple attributes, indicating the dataset’s coverage of complex and diverse tracking scenarios.
![TAG dataset](assets/4-attr.png?raw=true)
## Benchmark Results
We evaluated many representative trackers on the valid and test splits of our dataset:
*video object segmentation*
| Model | π’₯ & β„± | π’₯ | β„± | π’₯ & β„± | π’₯ | β„± |
|-------------------------------|---------|---------|---------|---------|---------|---------|
| STCN | 70.4 | 65.9 | 75.0 | 76.2 | 72.2 | 80.2 |
| AOT-SwinB | 78.1 | 73.1 | 83.2 | 80.9 | 76.4 | 85.4 |
| DeAOT-SwinB | 79.6 | 74.8 | 84.4 | 81.6 | 77.3 | 85.9 |
| XMem | 74.4 | 70.1 | 78.6 | 75.7 | 71.8 | 79.6 |
| DEVA | 77.9 | 73.1 | 82.6 | 82.1 | 78.0 | 86.1 |
| Cutie-base+ | 79.0 | 75.0 | 83.0 | 83.8 | 80.0 | 87.7 |
| Cutie-base+ w/MEGA | 80.3 | 76.5 | 84.2 | 84.9 | 81.3 | 88.5 |
| OneVOS | 80.1 | 75.2 | 85.1 | 81.0 | 76.5 | 85.4 |
| OneVOS w/MOSE | 79.3 | 74.3 | 84.3 | 82.4 | 78.0 | 86.7 |
| JointFormer | 76.6 | 72.8 | 80.5 | 79.1 | 75.5 | 82.7 |
| SAM2++ | 87.4 | 84.2 | 90.7 | 87.9 | 84.9 | 90.9 |
*single object tracking*
| Model | AUC (valid) | P_Norm (valid) | P (valid) | AUC (test) | P_Norm (test) | P (test) |
|------------------------------|---------|---------|---------|---------|---------|---------|
| OSTrack | 74.8 | 84.4 | 72.7 | 69.7 | 78.8 | 69.9 |
| SimTrack | 71.1 | 80.5 | 68.1 | 64.1 | 72.4 | 60.5 |
| MixViT w/ConvMAE | 72.1 | 80.9 | 70.5 | 69.7 | 78.2 | 70.2 |
| DropTrack | 76.8 | 86.9 | 74.4 | 71.1 | 80.5 | 72.1 |
| GRM | 73.1 | 82.3 | 71.4 | 69.1 | 77.4 | 69.1 |
| SeqTrack | 77.0 | 85.8 | 76.1 | 69.8 | 79.4 | 71.5 |
| ARTrack | 76.8 | 85.8 | 75.7 | 71.1 | 78.7 | 70.9 |
| ARTrack-V2 | 76.3 | 85.5 | 74.3 | 71.8 | 79.5 | 71.9 |
| ROMTrack | 75.6 | 85.4 | 73.7 | 71.3 | 80.8 | 72.8 |
| HIPTrack | 78.2 | 88.5 | 76.6 | 71.4 | 81.0 | 72.5 |
| LoRAT | 75.1 | 84.8 | 74.4 | 70.5 | 79.7 | 68.7 |
| SAM2++ | 80.7 | 89.7 | 77.8 | 78.0 | 85.7 | 81.5 |
*point tracking*
| Model | Acc (valid) | Acc (test) |
|------------|---------|---------|
| pips | 19.0 | 19.8 |
| pips++ | 20.9 | 23.1 |
| CoTracker | 23.3 | 22.3 |
| CoTracker3 | 29.6 | 29.1 |
| TAPTR | 23.7 | 23.8 |
| TAPIR | 21.3 | 24.6 |
| LocoTrack | 25.2 | 30.2 |
| Track-On | 24.8 | 25.8 |
| SAM2++ | 35.3 | 37.7 |
## Dataset Structure
```
<ImageSets>
β”‚
β”œβ”€β”€ valid.txt
β”œβ”€β”€ test.txt
<valid/test.tar.gz>
β”‚
β”œβ”€β”€ Annotations
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_1>
β”‚ β”‚ β”œβ”€β”€ 00000.png
β”‚ β”‚ β”œβ”€β”€ 00001.png
β”‚ β”‚ └── ...
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_2>
β”‚ β”‚ β”œβ”€β”€ 00000.png
β”‚ β”‚ β”œβ”€β”€ 00001.png
β”‚ β”‚ └── ...
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_...>
β”‚
β”œβ”€β”€ Points
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_1>.npz
β”‚ β”œβ”€β”€ <video_name_2>.npz
β”‚ β”œβ”€β”€ <video_name_...>.npz
β”‚
β”œβ”€β”€ Boxes
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_1>.txt
β”‚ β”œβ”€β”€ <video_name_2>.txt
β”‚ β”œβ”€β”€ <video_name_...>.txt
β”‚
β”œβ”€β”€ Visible
β”‚ β”‚
β”‚ β”œβ”€β”€ <video_name_1>.txt
β”‚ β”œβ”€β”€ <video_name_2>.txt
β”‚ β”œβ”€β”€ <video_name_...>.txt
β”‚
└── JPEGImages
β”‚
β”œβ”€β”€ <video_name_1>
β”‚ β”œβ”€β”€ 00000.jpg
β”‚ β”œβ”€β”€ 00001.jpg
β”‚ └── ...
β”‚
β”œβ”€β”€ <video_name_2>
β”‚ β”œβ”€β”€ 00000.jpg
β”‚ β”œβ”€β”€ 00001.jpg
β”‚ └── ...
β”‚
└── <video_name_...>
```
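As a starting point for consuming an extracted split, the sketch below loads one sequence's frames together with its three annotation granularities. The per-line format of the `Boxes`/`Visible` text files and the array names inside the `Points` `.npz` archives are not documented here, so the parsing choices are assumptions to verify against the released files.
```python
import os

import numpy as np
from PIL import Image

root = "Tracking-Any-Granularity/valid"  # assumed location after extracting valid.tar.gz
video = sorted(os.listdir(os.path.join(root, "JPEGImages")))[0]

# RGB frames and per-frame segmentation masks (indexed PNGs).
frame_dir = os.path.join(root, "JPEGImages", video)
mask_dir = os.path.join(root, "Annotations", video)
frames = [Image.open(os.path.join(frame_dir, f)) for f in sorted(os.listdir(frame_dir))]
masks = [np.array(Image.open(os.path.join(mask_dir, f))) for f in sorted(os.listdir(mask_dir))]

# Bounding boxes and visibility flags: assumed one line per frame; the exact field format is not specified here.
with open(os.path.join(root, "Boxes", f"{video}.txt")) as f:
    box_lines = f.read().splitlines()
with open(os.path.join(root, "Visible", f"{video}.txt")) as f:
    visible_lines = f.read().splitlines()

# Key-point annotations stored as an .npz archive; inspect the array names before use.
points = np.load(os.path.join(root, "Points", f"{video}.npz"))
print(video, len(frames), len(masks), len(box_lines), len(visible_lines), points.files)
```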
## BibTeX
If you find Tracking-Any-Granularity helpful to your research, please consider citing our paper.
```
@article{zhang2025sam2trackinggranularity,
title={SAM 2++: Tracking Anything at Any Granularity},
author={Jiaming Zhang and Cheng Liang and Yichun Yang and Chenkai Zeng and Yutao Cui and Xinwen Zhang and Xin Zhou and Kai Ma and Gangshan Wu and Limin Wang},
journal={arXiv preprint arXiv:2510.18822},
url={https://arxiv.org/abs/2510.18822},
year={2025}
}
```
## License
The Tracking-Any-Granularity dataset is licensed under the [Creative Commons Attribution 4.0 (CC BY 4.0) License](https://creativecommons.org/licenses/by/4.0/). The data is released for non-commercial research purposes only.