<div align="center">
# Hawk: Learning to Understand Open-World Video Anomalies
<div align="center">
### This is the official repository for [Hawk](https://arxiv.org/pdf/2405.16886).
[Jiaqi Tang^](https://jqt.me/), [Hao Lu^](https://scholar.google.com/citations?user=OOagpAcAAAAJ&hl=en), [Ruizheng Wu](https://scholar.google.com/citations?user=OOagpAcAAAAJ&hl=en), [Xiaogang Xu](https://xuxiaogang.com/), [Ke Ma](https://scholar.google.com.hk/citations?user=yXGNGS8AAAAJ&hl=en), [Cheng Fang](),
\
[Bin Guo](http://www.guob.org/), [Jiangbo Lu](https://sites.google.com/site/jiangbolu), [Qifeng Chen](https://cqf.io/) and [Ying-Cong Chen*](https://www.yingcong.me/)
^: Equal contribution.
*: Corresponding Author.
<img src="figs/icon.png" alt="Have eyes like a HAWK!" width="80">
</div>
</div>
## **Motivation** - Have eyes like a Hawk!
- Current video anomaly detection (VAD) systems are often limited by their superficial semantic understanding of scenes and minimal user interaction.
- Additionally, the data scarcity prevalent in existing datasets restricts their applicability in open-world scenarios.
<div align="center">
<img src="figs/motivation1.png" alt="Hawk">
</div>
## **Updates**
- Feb 24, 2025 - We release the **training and demo code** of **Hawk**.
- Feb 24, 2025 - We release the **dataset (video + annotation)** of **Hawk**. Check this Hugging Face link for [DOWNLOAD](https://huggingface.co/datasets/Jiaqi-hkust/hawk).
- Sept 26, 2024 - **Hawk** is accepted by NeurIPS 2024.
- June 29, 2024 - We release the **dataset (annotation)** of **Hawk**. Check this Google Drive link for [DOWNLOAD](https://drive.google.com/file/d/1WCnizldWZvtS4Yg5SX7ay5C3kUQfz-Eg/view?usp=sharing).
## **Getting Started**
### *Installation*
- Create the environment with the following steps:
```
apt install ffmpeg
conda env create -f environment.yml
conda activate hawk
```
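- To quickly verify the environment, the following minimal check can be run (an illustrative sketch, assuming PyTorch is provided by `environment.yml` and `ffmpeg` is on your PATH; it is not part of the repository):
```python
# Quick environment check (illustrative sketch, not part of the repository)
import shutil

import torch  # assumed to be installed via environment.yml

# ffmpeg must be reachable on PATH for video decoding
assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"

# Report the PyTorch build and whether a CUDA GPU is visible
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```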
### *Pretrained and Fine-tuned Models*
- The following checkpoints are used to run Hawk:
| Checkpoint | Link | Note |
|:------------------|-------------|-------------|
| Video-LLaMA-2-7B-Finetuned | [link](https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-7B-Finetuned/tree/main) | Used as initial weights for training.|
| **Hawk_Pretrained** | [link](https://huggingface.co/Jiaqi-hkust/hawk) | Pretrained on [WebVid](https://github.com/m-bain/webvid). |
| **Hawk_Finetuned** | [link](https://huggingface.co/Jiaqi-hkust/hawk) | Fine-tuned on the [Hawk dataset](https://huggingface.co/datasets/Jiaqi-hkust/hawk). |
- If you want to use the pretrained model, use the **Hawk_Pretrained** checkpoint.
- If you want to leverage the model for anomaly understanding, opt for the **Hawk_Finetuned** checkpoint. Both can also be downloaded programmatically, as sketched below.
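- The checkpoints can also be fetched with `huggingface_hub` (a minimal sketch under the assumption that the `huggingface_hub` package is available; the `local_dir` targets are placeholders, and the exact files inside each repo should be checked on the Hugging Face pages above):
```python
# Fetch Hawk weights from Hugging Face (illustrative sketch)
from huggingface_hub import snapshot_download

# Hawk_Pretrained and Hawk_Finetuned live in the same model repo
hawk_dir = snapshot_download(
    repo_id="Jiaqi-hkust/hawk",
    local_dir="checkpoints/hawk",  # placeholder target directory
)

# Base Video-LLaMA-2-7B weights used as initialization
base_dir = snapshot_download(
    repo_id="DAMO-NLP-SG/Video-LLaMA-2-7B-Finetuned",
    local_dir="checkpoints/Video-LLaMA-2-7B-Finetuned",  # placeholder target directory
)

print("Hawk checkpoints in:", hawk_dir)
print("Video-LLaMA weights in:", base_dir)
```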
## **Demo**
- The configuration file for the demo is [`eval.yaml`](/configs/eval_configs/eval.yaml).
- Replace the following paths with your own:
```
# Use LLaMA-2-chat as the base model
# Some checkpoints can be downloaded from Video-LLaMA-2-7B-Finetuned
# https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-7B-Finetuned
llama_model: ".../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf"
# Hawk Weight (Pretrained or Finetuned)
ckpt: '.../checkpoint.pth'
```
- Then, run the script:
```
python app.py \
--cfg-path configs/eval_configs/eval.yaml \
--model_type llama_v2 \
--gpu-id 0
```
- GUI
<div align="center">
<img src="figs/demo.png" alt="Hawk">
</div>
## **Training**
### *Dataset Preparation*
- **For your convenience, we now provide the videos and annotations of the Hawk dataset. You can download them from Hugging Face: [DOWNLOAD](https://huggingface.co/datasets/Jiaqi-hkust/hawk).**
- Traditional data acquisition method:
- Download all video datasets from their original sources:
1. [CUHK_Avenue](https://www.cse.cuhk.edu.hk/leojia/projects/detectabnormal/dataset.html)
2. [DoTA](https://github.com/MoonBlvd/Detection-of-Traffic-Anomaly)
3. [Ped1](http://www.svcl.ucsd.edu/projects/anomaly/dataset.htm)
4. [Ped2](http://www.svcl.ucsd.edu/projects/anomaly/dataset.htm)
5. [ShanghaiTech](https://svip-lab.github.io/dataset/campus_dataset.html)
6. [UBNormal](https://github.com/lilygeorgescu/UBnormal/)
7. [UCF_Crime](https://www.crcv.ucf.edu/projects/real-world/)
- Google Drive link to [DOWNLOAD](https://drive.google.com/file/d/1WCnizldWZvtS4Yg5SX7ay5C3kUQfz-Eg/view?usp=sharing) our annotations.
- Data structure: each folder contains one annotation file (e.g., CUHK Avenue, DoTA, etc.). The `All_Mix` directory contains all datasets for training and testing.
- The dataset is organized as follows:
```
(Hawk_data)
Annotation
├── All_Mix
│   ├── all_videos_all.json
│   ├── all_videos_test.json
│   └── all_videos_train.json
│
├── CUHK_Avenue
│   └── Avenue.json
├── DoTA
│   └── DoTA.json
├── Ped1
│   └── ...
├── ...
└── UCF_Crime
    └── ...

Videos
├── CUHK_Avenue
│   └── Avenue.json
├── DoTA
│   └── DoTA.json
├── Ped1
│   └── ...
└── ...

readme
```
Note: the data paths should be redefined to match your own directory layout; a quick sanity check for the downloaded annotations is sketched below.
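- The following sanity check (an illustrative sketch; the `Hawk_data` root is a placeholder, so adjust it to your own layout) simply confirms that every annotation file can be parsed as JSON and reports how many entries each one contains:
```python
# Verify that the downloaded Hawk annotations load correctly (illustrative sketch)
import json
from pathlib import Path

# Root of the extracted dataset (placeholder; adjust to your own path)
data_root = Path("Hawk_data")

for json_file in sorted((data_root / "Annotation").rglob("*.json")):
    with open(json_file, "r", encoding="utf-8") as f:
        entries = json.load(f)
    # Report how many annotated items each file holds
    print(f"{json_file.relative_to(data_root)}: {len(entries)} entries")
```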
### *Configuration*
- The configuration files for [`training`](/configs/train_configs) cover two stages.
- Replace the following paths with your own:
```
llama_model: ".../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf"
# The checkpoint of the vision branch after stage-1 pretraining (only needed for stage 2)
ckpt: ".../checkpoint.pth"
```
### *To Train*
- Run the training scripts:
```
# for pretraining
NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port='10000' train.py --cfg-path ./configs/train_configs/stage1_pretrain.yaml
# for fine-tuning
NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port='12001' train.py --cfg-path ./configs/train_configs/stage2_finetune.yaml
```
*Resource usage: training (stages 1 and 2) uses 4 × RTX A6000 48GB GPUs.*
## **Citations**
**The following is a BibTeX reference:**
``` latex
@inproceedings{atang2024hawk,
title = {Hawk: Learning to Understand Open-World Video Anomalies},
author = {Tang, Jiaqi and Lu, Hao and Wu, Ruizheng and Xu, Xiaogang and Ma, Ke and Fang, Cheng and Guo, Bin and Lu, Jiangbo and Chen, Qifeng and Chen, Ying-Cong},
year = {2024},
booktitle = {Neural Information Processing Systems (NeurIPS)}
}
```
## **Connect with Us**
If you have any questions, please feel free to send an email to `[email protected]`.
## **Acknowledgment**
This work is supported by the National Natural Science Foundation of China (No. 62206068) and the Natural Science Foundation of Zhejiang Province, China under No. LD24F020002.
This project is also inspired by [Video-LLaMA](https://github.com/DAMO-NLP-SG/Video-LLaMA).