---
base_model:
- ByteDance-Seed/BAGEL-7B-MoT
datasets:
- WeiChow/WEAVE
language:
- en
- zh
tags:
- image
- image-editing
- multimodal
license: apache-2.0
pipeline_tag: any-to-any
library_name: transformers
---
## Bagel-weave
[📄 Paper](https://arxiv.org/abs/2511.11434) |
[🤗 Dataset](https://huggingface.co/datasets/WeiChow/Weave/) |
[🤗 Model](https://huggingface.co/WeiChow/Bagel-weave) |
[💻 Code](https://github.com/weichow23/weave) |
[🌐 Project Page](https://weichow23.github.io/weave/)
> 📌 This is the official Bagel implementation of
> **Weave: A Benchmark for Evaluating Multimodal Editing Models**

Bagel-weave is finetuned on [WEAVE-100k](https://huggingface.co/datasets/WeiChow/WEAVE).
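
Inference follows the upstream BAGEL codebase (see the [GitHub repository](https://github.com/weichow23/weave)). The snippet below is only a minimal sketch for fetching the finetuned weights with `huggingface_hub`; the `local_dir` path is an assumption, choose any location you like.

```python
# Minimal sketch: download the Bagel-weave checkpoint so it can be used with
# the inference scripts from https://github.com/weichow23/weave (assumed workflow).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="WeiChow/Bagel-weave",        # this model repository
    local_dir="checkpoints/Bagel-weave",  # assumed local path; pick your own
)
print(f"Weights downloaded to {local_dir}")
```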
## License
Bagel-weave is licensed under the Apache 2.0 license. It is finetuned from [Bagel-7B-MoT](https://huggingface.co/ByteDance-Seed/BAGEL-7B-MoT), [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), and [siglip-so400m-14-384-flash-attn2](https://huggingface.co/HuggingFaceM4/siglip-so400m-14-384-flash-attn2), and uses the [FLUX.1-schnell VAE model](https://huggingface.co/black-forest-labs/FLUX.1-schnell), all under Apache 2.0.
## ✍️ Citation
```bibtex
```