RSCCM: Remote Sensing Change Captioning Model
This model (RSCCM) is presented in the paper RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events.
- Paper
- Project Page
- Code on GitHub
Overview
RSCCM is a fully fine-tuned (supervised) version of Qwen2.5-VL-7B-Instruct specialized for remote sensing change captioning. It is trained on the RSCC dataset; training details are given in our paper.
Installation
Follow the setup instructions in the official Qwen2.5-VL Hugging Face repository (see here).
```bash
pip install transformers accelerate  # recent stable releases already integrate Qwen2.5-VL
pip install "qwen-vl-utils[decord]==0.0.8"
```
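The model-loading snippet below requests FlashAttention 2. If your environment does not already provide it, it can be installed separately with the standard command from the flash-attn project (this is an extra step we suggest, not one listed in the original instructions; a compatible CUDA toolchain is assumed):

```bash
# Optional: only needed for attn_implementation="flash_attention_2"
pip install flash-attn --no-build-isolation
```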
Inference
For more implementation details, refer to the official Qwen2.5-VL GitHub repository (see here).
- Load the model (same as for Qwen2.5-VL)
```python
from transformers import (
    Qwen2_5_VLForConditionalGeneration,
    AutoProcessor,
)
import torch

model_id = "BiliSakura/RSCCM"
model_path = model_id  # downloaded automatically from huggingface.co, or set to path/to/your/model/folder

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
).to("cuda")
processor = AutoProcessor.from_pretrained(model_path)
```
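If FlashAttention 2 is not installed, loading should also work with PyTorch's built-in scaled-dot-product attention. A minimal fallback sketch, assuming the same `model_path` as above:

```python
# Fallback sketch: use PyTorch SDPA when flash-attn is not available.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # built into recent PyTorch; no extra package
).to("cuda")
```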
- Prepare the image pair
```python
from PIL import Image

pre_img_path = "path/to/pre/event/image"
post_img_path = "path/to/post/event/image"

text_prompt = """
Give change description between two satellite images.
Output answer in a news style with a few sentences using precise phrases separated by commas.
"""

pre_image = Image.open(pre_img_path)
post_image = Image.open(post_img_path)
```
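Satellite tiles are sometimes stored with an alpha channel or as single-band rasters. An optional normalization step (our suggestion, not a requirement stated in this card) that forces 3-channel RGB input before inference:

```python
# Optional: ensure 3-channel RGB input (assumes PIL can read the files)
pre_image = Image.open(pre_img_path).convert("RGB")
post_image = Image.open(post_img_path).convert("RGB")
```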
- Inference
```python
from qwen_vl_utils import process_vision_info
import torch

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": pre_image},
            {"type": "image", "image": post_image},
            {"type": "text", "text": text_prompt},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, _ = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, padding=True, return_tensors="pt"
).to("cuda", torch.bfloat16)

# Generate captions for the input image pair
generated_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    # temperature=TEMPERATURE
)

# Strip the prompt tokens, keeping only the newly generated ones
generated_ids_trimmed = [
    out_ids[len(in_ids):]
    for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
captions = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)
change_caption = captions[0]
```
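When captioning many pre/post pairs, the steps above can be wrapped in a helper. A minimal sketch (the function name `generate_change_caption` and the loop over pairs are our own illustration, not part of the released code; it reuses the `model`, `processor`, and `text_prompt` defined above):

```python
def generate_change_caption(pre_image, post_image, prompt=text_prompt):
    """Caption the change between one pre-event and one post-event image."""
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": pre_image},
            {"type": "image", "image": post_image},
            {"type": "text", "text": prompt},
        ],
    }]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, _ = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, padding=True, return_tensors="pt"
    ).to("cuda", torch.bfloat16)
    generated_ids = model.generate(**inputs, max_new_tokens=512)
    trimmed = [
        out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)
    ]
    return processor.batch_decode(
        trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]

# Example usage over a list of (pre, post) PIL image pairs:
# for pre, post in pairs:
#     print(generate_change_caption(pre, post))
```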
Citation
If you find this repository helpful, feel free to cite our paper:
```bibtex
@misc{chen2025rscclargescaleremotesensing,
  title={RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events},
  author={Zhenyuan Chen and Chenxi Wang and Ningyu Zhang and Feng Zhang},
  year={2025},
  eprint={2509.01907},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.01907},
}

@article{qwen2.5vl,
  title={Qwen2.5-VL Technical Report},
  url={http://arxiv.org/abs/2502.13923},
  DOI={10.48550/arXiv.2502.13923},
  author={Bai, Shuai and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Song, Sibo and Dang, Kai and Wang, Peng and Wang, Shijie and Tang, Jun and Zhong, Humen and Zhu, Yuanzhi and Yang, Mingkun and Li, Zhaohai and Wan, Jianqiang and Wang, Pengfei and Ding, Wei and Fu, Zheren and Xu, Yiheng and Ye, Jiabo and Zhang, Xi and Xie, Tianbao and Cheng, Zesen and Zhang, Hang and Yang, Zhibo and Xu, Haiyang and Lin, Junyang},
  year={2025},
  month=feb
}
```
Licensing Information
The dataset is released under the CC BY 4.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Acknowledgements
Our RSCC dataset is built on the xBD and EBD datasets.
We thank the authors of Kimi-VL, BLIP-3, Phi-4-Multimodal, Qwen2-VL, Qwen2.5-VL, LLaVA-NeXT-Interleave, LLaVA-OneVision, InternVL 3, Pixtral, TEOChat, and CCExpert for releasing their models and code as open source.
The metric implementations are derived from huggingface/evaluate.
The training implementation is derived from QwenLM/Qwen2.5-VL.