---
license: cc-by-4.0
pipeline_tag: image-to-text
library_name: transformers
tags:
- remote-sensing
- change-detection
- image-captioning
---

# RSCCM: Remote Sensing Change Captioning Model

This model (`RSCCM`) is presented in the paper [RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events](https://huggingface.co/papers/2509.01907).

- 📄 [Paper](https://huggingface.co/papers/2509.01907)
- 🌐 [Project Page](https://bili-sakura.github.io/RSCC/)
- 💻 [Code on GitHub](https://github.com/Bili-Sakura/RSCC)

## Overview

RSCCM is a supervised full fine-tuning of [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), specialized for remote sensing change captioning and trained on the [RSCC](https://huggingface.co/datasets/BiliSakura/RSCC) dataset. Training details are given in our [paper](https://huggingface.co/papers/2509.01907).

## Installation

Follow the official Qwen2.5-VL Hugging Face repository (see [here](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)).

```bash
pip install transformers accelerate  # the latest stable release already integrates Qwen2.5-VL
pip install "qwen-vl-utils[decord]==0.0.8"
```
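The loading snippet below passes `attn_implementation="flash_attention_2"`, which assumes the optional `flash-attn` package and a supported CUDA GPU; if you prefer not to build it, drop that argument and Transformers falls back to its default attention implementation.

```bash
# Optional: only needed if you keep attn_implementation="flash_attention_2" below
pip install flash-attn --no-build-isolation
```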
## Inference

For more implementation details, refer to the official Qwen2.5-VL GitHub repository (see [here](https://github.com/QwenLM/Qwen2.5-VL)).

1. Load the model (same as Qwen2.5-VL)

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "BiliSakura/RSCCM"
model_path = model_id  # downloaded from huggingface.co automatically, or set to path/to/your/model/folder

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # drop this argument if flash-attn is not installed
).to("cuda")
processor = AutoProcessor.from_pretrained(model_path)
```

2. Prepare the image pair

```python
from PIL import Image

pre_img_path = "path/to/pre/event/image"
post_img_path = "path/to/post/event/image"

text_prompt = """
Give change description between two satellite images.
Output answer in a news style with a few sentences using precise phrases separated by commas.
"""

pre_image = Image.open(pre_img_path)
post_image = Image.open(post_img_path)
```

3. Run inference

```python
import torch
from qwen_vl_utils import process_vision_info

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": pre_image},
            {"type": "image", "image": post_image},
            {"type": "text", "text": text_prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, _ = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, padding=True, return_tensors="pt"
).to("cuda", torch.bfloat16)

# Generate a change caption for the input image pair
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    # temperature=TEMPERATURE
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
captions = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
change_caption = captions[0]
```

## 📜 Citation

If you find this repository helpful, feel free to cite our paper:

```bibtex
@misc{chen2025rscclargescaleremotesensing,
  title={RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events},
  author={Zhenyuan Chen and Chenxi Wang and Ningyu Zhang and Feng Zhang},
  year={2025},
  eprint={2509.01907},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.01907},
}

@article{qwen2.5vl,
  title={Qwen2.5-VL Technical Report},
  url={http://arxiv.org/abs/2502.13923},
  DOI={10.48550/arXiv.2502.13923},
  author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
  year={2025},
  month=feb
}
```

## Licensing Information

The dataset is released under the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## 🙏 Acknowledgement

Our RSCC dataset is built on the [xBD](https://www.xview2.org/) and [EBD](https://figshare.com/articles/figure/An_Extended_Building_Damage_EBD_dataset_constructed_from_disaster-related_bi-temporal_remote_sensing_images_/25285009) datasets. We are thankful to [Kimi-VL](https://hf-mirror.com/moonshotai/Kimi-VL-A3B-Instruct), [BLIP-3](https://hf-mirror.com/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5), [Phi-4-Multimodal](https://hf-mirror.com/microsoft/Phi-4-multimodal-instruct), [Qwen2-VL](https://hf-mirror.com/Qwen/Qwen2-VL-7B-Instruct), [Qwen2.5-VL](https://hf-mirror.com/Qwen/Qwen2.5-VL-72B-Instruct), [LLaVA-NeXT-Interleave](https://hf-mirror.com/llava-hf/llava-interleave-qwen-7b-hf), [LLaVA-OneVision](https://hf-mirror.com/llava-hf/llava-onevision-qwen2-7b-ov-hf), [InternVL 3](https://hf-mirror.com/OpenGVLab/InternVL3-8B), [Pixtral](https://hf-mirror.com/mistralai/Pixtral-12B-2409), [TEOChat](https://github.com/ermongroup/TEOChat) and [CCExpert](https://github.com/Meize0729/CCExpert) for releasing their models and code as open-source contributions. The metric implementations are derived from [huggingface/evaluate](https://github.com/huggingface/evaluate). The training implementation is derived from [QwenLM/Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL).