---
license: mit
base_model:
- CodeGoat24/UnifiedReward-2.0-qwen-3b
---
# UnifiedReward-Edit-qwen-7B
[2025/10/23] 🔥🔥🔥 We release **UnifiedReward-Edit**-3b, a unified reward model for **both Text-to-Image and Image-to-Image generation**!
For the image editing reward task, our models support three modes (a minimal inference sketch follows the list):
>1. Pairwise Rank: directly judge which of two edited images is better.
>
>2. Pairwise Score: assign a separate score to each image in a pair.
>
>3. Pointwise Score: rate a single image on two axes, instruction-following and overall image quality.
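
Below is a minimal sketch of the pointwise-score mode. It assumes the checkpoint exposes the standard Qwen2.5-VL chat interface in Hugging Face `transformers`; the prompt wording, file paths, and generation settings here are illustrative assumptions rather than the official protocol (the exact prompts live in the inference scripts linked below).

```python
# Minimal pointwise-score sketch. Assumptions (not confirmed by this card):
# the checkpoint follows the standard Qwen2.5-VL chat interface in
# transformers, and the prompt wording below is illustrative only; the
# official prompts live in the UnifiedReward-Edit/ scripts on GitHub.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "CodeGoat24/UnifiedReward-Edit-qwen-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

source = Image.open("source.png")  # image before editing (hypothetical path)
edited = Image.open("edited.png")  # image after editing (hypothetical path)
instruction = "Replace the red car with a blue bicycle."

# Ask for the two pointwise axes named above: instruction-following and
# overall image quality.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "image"},
        {"type": "text", "text": (
            "The first image is the source and the second is the edited "
            f"result for the instruction: '{instruction}'. Rate the edit on "
            "instruction-following and overall image quality."
        )},
    ],
}]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(
    text=[prompt], images=[source, edited], return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```

The pairwise modes follow the same pattern: feed both candidate edits (together with the source image) in one conversation and ask either which candidate is better (Pairwise Rank) or for a score per candidate (Pairwise Score).
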
🚀 The image editing reward inference code is available in the [`UnifiedReward-Edit/`](https://github.com/CodeGoat24/UnifiedReward/tree/main/UnifiedReward-Edit) directory, while the T2I inference code is unchanged from our previous models. The editing training data is preprocessed from [EditScore](https://huggingface.co/datasets/EditScore/EditScore-Reward-Data) and [EditReward](https://huggingface.co/datasets/TIGER-Lab/EditReward-Data) and will be released soon. We sincerely appreciate all contributors!
For further details, please refer to the following resources:
- 📰 Paper: https://arxiv.org/pdf/2503.05236
- 🪐 Project Page: https://codegoat24.github.io/UnifiedReward/
- 🤗 Model Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a
- 🤗 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
- 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)
## Citation
```
@article{unifiedreward,
  title={Unified reward model for multimodal understanding and generation},
  author={Wang, Yibin and Zang, Yuhang and Li, Hao and Jin, Cheng and Wang, Jiaqi},
  journal={arXiv preprint arXiv:2503.05236},
  year={2025}
}
```