---
license: cc-by-4.0
task_categories:
- video-tracking
tags:
- video-object-segmentation
- single-object-tracking
- point-tracking
- computer-vision
- benchmark
language:
- en
pretty_name: TAG
arxiv: 2510.18822
configs:
- config_name: default
data_files: "*.json"
sep: "\t"
---
# SAM 2++: Tracking Anything at Any Granularity
[Evaluation Server](TODO) | [Homepage](https://tracking-any-granularity.github.io/) | [Paper](https://arxiv.org/abs/2510.18822) | [GitHub](https://github.com/MCG-NJU/SAM2-Plus)
## Download
We recommend using `huggingface-cli` to download:
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download MCG-NJU/Tracking-Any-Granularity --repo-type dataset --local-dir ./Tracking-Any-Granularity --local-dir-use-symlinks False --max-workers 16
```
## Latest News
- **[2025-10-27]** To support language-referring benchmarks such as tracking by natural language specification and referring video object segmentation, we added a natural-language description of each target object to the meta JSON files.
- **[2025-10-24]** The [SAM 2++ model](https://github.com/MCG-NJU/SAM2-Plus) and part of the [Tracking-Any-Granularity dataset](https://huggingface.co/datasets/MCG-NJU/tracking-any-granularity) are released. See the [project page](https://tracking-any-granularity.github.io/) for details.
## Dataset Summary
**T**racking-**A**ny-**G**ranularity (TAG) is a comprehensive dataset for training our unified tracking model, SAM 2++, with annotations at three granularities: segmentation masks, bounding boxes, and key points.
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00025.gif"/></td>
<td><img width="220" src="assets/data/00076.gif"/></td>
<td><img width="220" src="assets/data/00045.gif"/></td>
</tr>
</tbody>
</table>
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00102.gif"/></td>
<td><img width="220" src="assets/data/00103.gif"/></td>
<td><img width="220" src="assets/data/00152.gif"/></td>
</tr>
</tbody>
</table>
<table align="center">
<tbody>
<tr>
<td><img width="220" src="assets/data/00227.gif"/></td>
<td><img width="220" src="assets/data/00117.gif"/></td>
<td><img width="220" src="assets/data/00312.gif"/></td>
</tr>
</tbody>
</table>
## Dataset Description
Our dataset draws on **a wide range of video sources**, offering strong diversity and serving as a solid benchmark for evaluating tracking performance. Each video sequence is annotated with **18 attributes representing different tracking challenges**, several of which can appear simultaneously in the same video. Common challenges include motion blur, deformation, and partial occlusion, reflecting the dataset's high difficulty. Most videos carry multiple attributes, showing that the dataset covers complex and diverse tracking scenarios.

## Benchmark Results
We evaluate representative trackers on the valid and test splits of our dataset; in each table below, the first group of columns reports results on the valid split and the second group on the test split.
*Video object segmentation*
| Model | J&F (valid) | J (valid) | F (valid) | J&F (test) | J (test) | F (test) |
|-------------------------------|---------|---------|---------|---------|---------|---------|
| STCN | 70.4 | 65.9 | 75.0 | 76.2 | 72.2 | 80.2 |
| AOT-SwinB | 78.1 | 73.1 | 83.2 | 80.9 | 76.4 | 85.4 |
| DeAOT-SwinB | 79.6 | 74.8 | 84.4 | 81.6 | 77.3 | 85.9 |
| XMem | 74.4 | 70.1 | 78.6 | 75.7 | 71.8 | 79.6 |
| DEVA | 77.9 | 73.1 | 82.6 | 82.1 | 78.0 | 86.1 |
| Cutie-base+ | 79.0 | 75.0 | 83.0 | 83.8 | 80.0 | 87.7 |
| Cutie-base+ w/MEGA | 80.3 | 76.5 | 84.2 | 84.9 | 81.3 | 88.5 |
| OneVOS | 80.1 | 75.2 | 85.1 | 81.0 | 76.5 | 85.4 |
| OneVOS w/MOSE | 79.3 | 74.3 | 84.3 | 82.4 | 78.0 | 86.7 |
| JointFormer | 76.6 | 72.8 | 80.5 | 79.1 | 75.5 | 82.7 |
| SAM 2++ | 87.4 | 84.2 | 90.7 | 87.9 | 84.9 | 90.9 |
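J and F are the standard video-object-segmentation metrics: J is region similarity (mask IoU) and F is a contour accuracy measure, with J&F their mean. As a minimal sketch (assuming the usual DAVIS-style definition of J; the exact evaluation protocol is the one on the evaluation server), the region term is:

```python
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: conventionally score 1
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

# Toy example: two 4x4 masks of 4 pixels each, overlapping in 2 pixels.
gt = np.zeros((4, 4), dtype=int); gt[:2, :2] = 1
pred = np.zeros((4, 4), dtype=int); pred[:2, 1:3] = 1
print(round(region_similarity(pred, gt), 3))  # 2 / 6 ≈ 0.333
```

Per-sequence J is this score averaged over frames; the table reports the further average over all sequences in a split.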
*Single object tracking*
| Model | AUC (valid) | P_Norm (valid) | P (valid) | AUC (test) | P_Norm (test) | P (test) |
|------------------------------|---------|---------|---------|---------|---------|---------|
| OSTrack | 74.8 | 84.4 | 72.7 | 69.7 | 78.8 | 69.9 |
| SimTrack | 71.1 | 80.5 | 68.1 | 64.1 | 72.4 | 60.5 |
| MixViT w/ConvMAE | 72.1 | 80.9 | 70.5 | 69.7 | 78.2 | 70.2 |
| DropTrack | 76.8 | 86.9 | 74.4 | 71.1 | 80.5 | 72.1 |
| GRM | 73.1 | 82.3 | 71.4 | 69.1 | 77.4 | 69.1 |
| SeqTrack | 77.0 | 85.8 | 76.1 | 69.8 | 79.4 | 71.5 |
| ARTrack | 76.8 | 85.8 | 75.7 | 71.1 | 78.7 | 70.9 |
| ARTrack-V2 | 76.3 | 85.5 | 74.3 | 71.8 | 79.5 | 71.9 |
| ROMTrack | 75.6 | 85.4 | 73.7 | 71.3 | 80.8 | 72.8 |
| HIPTrack | 78.2 | 88.5 | 76.6 | 71.4 | 81.0 | 72.5 |
| LoRAT | 75.1 | 84.8 | 74.4 | 70.5 | 79.7 | 68.7 |
| SAM 2++ | 80.7 | 89.7 | 77.8 | 78.0 | 85.7 | 81.5 |
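AUC here is the area under the success plot, the standard single-object-tracking score: for each overlap threshold in [0, 1], the success rate is the fraction of frames whose predicted-box IoU with the ground truth exceeds the threshold, and AUC averages that rate over the thresholds. A minimal sketch (assuming the common 21-threshold grid used by benchmarks such as LaSOT; the exact protocol is the server's):

```python
import numpy as np

def success_auc(ious: np.ndarray) -> float:
    """AUC of the success plot for one sequence, given per-frame box IoUs."""
    thresholds = np.linspace(0.0, 1.0, 21)              # 0.00, 0.05, ..., 1.00
    success = [(ious > t).mean() for t in thresholds]   # success rate per threshold
    return float(np.mean(success))

# Toy sequence with four frames of varying overlap quality.
ious = np.array([0.91, 0.77, 0.52, 0.33])
print(round(success_auc(ious), 3))  # (19 + 16 + 11 + 7) / (4 * 21) ≈ 0.631
```

With a dense threshold grid this AUC converges to the mean per-frame IoU, which is why the two are often used interchangeably.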
*Point tracking*
| Model | Acc (valid) | Acc (test) |
|------------|---------|---------|
| PIPs | 19.0 | 19.8 |
| PIPs++ | 20.9 | 23.1 |
| CoTracker | 23.3 | 22.3 |
| CoTracker3 | 29.6 | 29.1 |
| TAPTR | 23.7 | 23.8 |
| TAPIR | 21.3 | 24.6 |
| LocoTrack | 25.2 | 30.2 |
| Track-On | 24.8 | 25.8 |
| SAM 2++ | 35.3 | 37.7 |
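Point-tracking accuracy is commonly measured TAP-Vid-style: the fraction of visible points predicted within a pixel threshold of the ground truth, averaged over the thresholds {1, 2, 4, 8, 16}. The sketch below assumes that convention; the exact protocol behind the Acc column is defined by our evaluation server.

```python
import numpy as np

def position_accuracy(pred: np.ndarray, gt: np.ndarray, visible: np.ndarray) -> float:
    """TAP-Vid-style delta_avg: fraction of visible points within
    {1, 2, 4, 8, 16} pixels of ground truth, averaged over thresholds."""
    err = np.linalg.norm(pred - gt, axis=-1)   # per-point L2 error in pixels
    err = err[visible.astype(bool)]            # occluded points are not scored
    return float(np.mean([(err < t).mean() for t in (1, 2, 4, 8, 16)]))

# Toy frame: two visible points (errors 0.5 px and 5 px) and one occluded point.
pred = np.array([[10.5, 10.0], [3.0, 4.0], [50.0, 50.0]])
gt   = np.array([[10.0, 10.0], [0.0, 0.0], [20.0, 20.0]])
vis  = np.array([1, 1, 0])
print(position_accuracy(pred, gt, vis))  # → 0.7
```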
## Dataset Structure
```
<ImageSets>
│
├── valid.txt
└── test.txt
<valid/test.tar.gz>
│
├── Annotations
│   │
│   ├── <video_name_1>
│   │   ├── 00000.png
│   │   ├── 00001.png
│   │   └── ...
│   │
│   ├── <video_name_2>
│   │   ├── 00000.png
│   │   ├── 00001.png
│   │   └── ...
│   │
│   └── <video_name_...>
│
├── Points
│   │
│   ├── <video_name_1>.npz
│   ├── <video_name_2>.npz
│   └── <video_name_...>.npz
│
├── Boxes
│   │
│   ├── <video_name_1>.txt
│   ├── <video_name_2>.txt
│   └── <video_name_...>.txt
│
├── Visible
│   │
│   ├── <video_name_1>.txt
│   ├── <video_name_2>.txt
│   └── <video_name_...>.txt
│
└── JPEGImages
    │
    ├── <video_name_1>
    │   ├── 00000.jpg
    │   ├── 00001.jpg
    │   └── ...
    │
    ├── <video_name_2>
    │   ├── 00000.jpg
    │   ├── 00001.jpg
    │   └── ...
    │
    └── <video_name_...>
```
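The per-sequence files can be loaded with a few lines of numpy. The sketch below builds a toy sequence mimicking the layout above and reads it back; the per-frame file formats it assumes (one comma-separated `x,y,w,h` box per line in `Boxes/`, one 0/1 flag per line in `Visible/`) are illustrative guesses, so check the released files for the authoritative format.

```python
import tempfile
from pathlib import Path

import numpy as np

# Build a toy two-frame sequence in a temporary directory.
root = Path(tempfile.mkdtemp())
(root / "Boxes").mkdir()
(root / "Visible").mkdir()
(root / "Boxes" / "video_0.txt").write_text("10,20,30,40\n12,22,30,40\n")
(root / "Visible" / "video_0.txt").write_text("1\n0\n")

def load_sequence(root: Path, name: str):
    """Load per-frame boxes and visibility flags for one video."""
    boxes = np.loadtxt(root / "Boxes" / f"{name}.txt", delimiter=",", ndmin=2)
    visible = np.loadtxt(root / "Visible" / f"{name}.txt", ndmin=1).astype(bool)
    assert len(boxes) == len(visible), "one box and one flag per frame"
    return boxes, visible

boxes, visible = load_sequence(root, "video_0")
print(boxes.shape, visible.tolist())  # (2, 4) [True, False]
```

Masks in `Annotations/` are per-frame PNGs and key points in `Points/` are `.npz` archives; both can be loaded analogously with an image library and `np.load`.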
## BibTeX
If you find Tracking-Any-Granularity helpful to your research, please consider citing our paper.
```bibtex
@article{zhang2025sam2trackinggranularity,
title={SAM 2++: Tracking Anything at Any Granularity},
author={Jiaming Zhang and Cheng Liang and Yichun Yang and Chenkai Zeng and Yutao Cui and Xinwen Zhang and Xin Zhou and Kai Ma and Gangshan Wu and Limin Wang},
journal={arXiv preprint arXiv:2510.18822},
url={https://arxiv.org/abs/2510.18822},
year={2025}
}
```
## License
The Tracking-Any-Granularity dataset is licensed under the [Creative Commons Attribution 4.0 (CC BY 4.0) License](https://creativecommons.org/licenses/by/4.0/). The data is released for non-commercial research purposes only.