Hawk: Learning to Understand Open-World Video Anomalies
This is the official repository for Hawk.
Jiaqi Tang^, Hao Lu^, Ruizheng Wu, Xiaogang Xu, Ke Ma, Cheng Fang,
Bin Guo, Jiangbo Lu, Qifeng Chen and Ying-Cong Chen*
^: Equal contribution. *: Corresponding Author.
Motivation - Have eyes like a Hawk!
- Current video anomaly detection (VAD) systems are often limited by their superficial semantic understanding of scenes and minimal user interaction.
- Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios.
Updates
- Feb 24, 2025 - We release the training and demo code of Hawk.
- Feb 24, 2025 - We release the dataset (video + annotation) of Hawk. Check this Hugging Face link to DOWNLOAD.
- Sep 26, 2024 - Hawk is accepted by NeurIPS 2024.
- June 29, 2024 - We release the dataset (annotation) of Hawk. Check this Google Cloud link to DOWNLOAD.
Getting Started
Installation
- Create the environment with the following steps:

      apt install ffmpeg
      conda env create -f environment.yml
      conda activate hawk
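Optionally, you can verify that the new environment sees your GPUs before moving on. This is a minimal sanity check, assuming PyTorch is installed by `environment.yml`:

    # Minimal sanity check (assumes PyTorch is installed via environment.yml).
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())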
Pretrained and Fine-tuned Models
The following checkpoints are used to run Hawk:

| Checkpoint | Link | Note |
| :-- | :-- | :-- |
| Video-LLaMA-2-7B-Finetuned | link | Used as initial weights for training. |
| Hawk_Pretrained | link | Pretrained on the WebVid dataset. |
| Hawk_Finetuned | link | Fine-tuned on the Hawk dataset. |

If you want to use the pretrained model, use the Hawk_Pretrained checkpoint. If you wish to use the model for anomaly understanding, choose the Hawk_Finetuned checkpoint.
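If you prefer to fetch the base weights programmatically, the sketch below uses `huggingface_hub`; the repo id matches the Hugging Face link referenced in the demo config, while the local directory is an arbitrary placeholder. The Hawk_Pretrained and Hawk_Finetuned checkpoints should still be taken from the links in the table above.

    # Sketch: download the base Video-LLaMA-2-7B-Finetuned weights with huggingface_hub.
    # local_dir is a placeholder -- point it to wherever you keep checkpoints.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="DAMO-NLP-SG/Video-LLaMA-2-7B-Finetuned",
        local_dir="./checkpoints/Video-LLaMA-2-7B-Finetuned",
    )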
Demo
The configuration file for the demo is `configs/eval_configs/eval.yaml`. Replace the following paths with your own:
    # Use LLaMA-2-chat as the base model
    # Some checkpoints can be downloaded from Video-LLaMA-2-7B-Finetuned:
    # https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-7B-Finetuned
    llama_model: ".../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf"

    # Hawk weights (Pretrained or Finetuned)
    ckpt: '.../checkpoint.pth'

Then, run the script:

    python app.py \
        --cfg-path configs/eval_configs/eval.yaml \
        --model_type llama_v2 \
        --gpu-id 0

GUI
Training
Dataset Preparation
For your convenience, we now provide the videos and annotations for the Hawk dataset. You can download them from Hugging Face: DOWNLOAD.
Traditional Data Acquisition Method:
- DOWNLOAD all video datasets from their original sources.
- Use the Google Drive link to DOWNLOAD our annotations.
Data structure: each folder contains one annotation file (e.g., CUHK Avenue, DoTA, etc.). The `All_Mix` directory contains all datasets used for training and testing. The dataset is organized as follows:
    (Hawk_data)
    Annotation
    ├── All_Mix
    │   ├── all_videos_all.json
    │   ├── all_videos_test.json
    │   └── all_videos_train.json
    │
    ├── CUHK_Avenue
    │   └── Avenue.json
    ├── DoTA
    │   └── DoTA.json
    ├── Ped1
    │   └── ...
    ├── ...
    └── UCF_Crime
        └── ...

    Videos
    ├── CUHK_Avenue
    │   └── Avenue.json
    ├── DoTA
    │   └── DoTA.json
    ├── Ped1
    │   └── ...
    └── ...

    readme

Note: the data path should be redefined.
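After downloading, you can quickly confirm that an annotation split parses correctly. The sketch below only checks that the file is valid JSON and reports its size; the exact schema is defined by the dataset release:

    # Sketch: verify an annotation split loads, without assuming its internal schema.
    import json
    from pathlib import Path

    # Adjust this to your (redefined) data path.
    ann_path = Path("Hawk_data/Annotation/All_Mix/all_videos_train.json")

    with ann_path.open("r", encoding="utf-8") as f:
        data = json.load(f)

    print(f"Loaded a {type(data).__name__} with {len(data)} top-level entries")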
Configuration
The configuration files for training include two stages. Replace the following paths with your own:
    llama_model: ".../Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf"

    # The checkpoint of the vision branch after stage-1 pretraining (only needed for stage 2)
    ckpt: ".../checkpoint.pth"
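Before launching a multi-GPU run, it can save time to confirm that every path you filled in actually exists. The helper below is only a sketch (it is not part of the Hawk codebase); it walks a YAML config and flags string values that look like paths but are missing on disk:

    # Sketch: flag missing filesystem paths in a training config (requires PyYAML).
    import os
    import sys

    import yaml

    cfg_path = sys.argv[1] if len(sys.argv) > 1 else "./configs/train_configs/stage2_finetune.yaml"
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)

    def check(node, prefix=""):
        # Recursively report string values that contain '/' but do not exist on disk.
        if isinstance(node, dict):
            for key, value in node.items():
                check(value, f"{prefix}{key}.")
        elif isinstance(node, list):
            for i, value in enumerate(node):
                check(value, f"{prefix}{i}.")
        elif isinstance(node, str) and "/" in node and not os.path.exists(node):
            print(f"[missing] {prefix.rstrip('.')} -> {node}")

    check(cfg)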
To Train
Then, run the script:
    # for pretraining
    NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port='10000' train.py --cfg-path ./configs/train_configs/stage1_pretrain.yaml

    # for fine-tuning
    NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port='12001' train.py --cfg-path ./configs/train_configs/stage2_finetune.yaml

Resource usage: training (stage 1 and stage 2) uses 4 * RTX A6000 48G GPUs.
Citations
The following is a BibTeX reference:
@inproceedings{atang2024hawk,
title = {Hawk: Learning to Understand Open-World Video Anomalies},
author = {Tang, Jiaqi and Lu, Hao and Wu, Ruizheng and Xu, Xiaogang and Ma, Ke and Fang, Cheng and Guo, Bin and Lu, Jiangbo and Chen, Qifeng and Chen, Ying-Cong},
year = {2024},
booktitle = {Neural Information Processing Systems (NeurIPS)}
}
Connecting with Us?
If you have any questions, please feel free to send an email to [email protected].
Acknowledgment
This work is supported by the National Natural Science Foundation of China (No. 62206068) and the Natural Science Foundation of Zhejiang Province, China under No. LD24F020002.
Also, this project is inspired by Video-LLaMA.