---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-14B
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-14B-Instruct-1M
- tanliboy/lambda-qwen2.5-14b-dpo-test
- arcee-ai/SuperNova-Medius
- arcee-ai/Virtuoso-Small-v2
- Azure99/Blossom-V6-14B
- Qwen/Qwen2.5-Coder-14B
- Qwen/Qwen2.5-Coder-14B-Instruct
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
pipeline_tag: text-generation
tags:
- merge
---
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F64e174e202fa032de4143324%2Fzx2LWe9rip2AVr76BH4Er.png)

# Qwen2.5-14B-YOYO-V4-p3

*[Qwen2.5-YOYO Fourth-Gen Model Officially Released!](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-V4)*

This is the **final preview version** of the fourth-generation Qwen YOYO series model, and it is currently my favorite iteration.

Aside from context length, it is identical to **Qwen2.5-14B-YOYO-V4** in every other respect. The upcoming official release will not only extend the context length to 1M tokens but also disclose the complete merge recipe.

Stay tuned for more updates!