arxiv:2509.06155

UniVerse-1: Unified Audio-Video Generation via Stitching of Experts

Published on Sep 7
· Submitted by wang on Sep 9
AI-generated summary

UniVerse-1, a unified audio-video generation model, uses a stitching of experts technique to combine pre-trained video and music models, ensuring accurate temporal alignment and producing high-quality audio-visual outputs.

Abstract

We introduce UniVerse-1, a unified, Veo-3-like model capable of simultaneously generating coordinated audio and video. To enhance training efficiency, we bypass training from scratch and instead employ a stitching of experts (SoE) technique. This approach deeply fuses the corresponding blocks of pre-trained video and music generation expert models, thereby fully leveraging their foundational capabilities. To ensure accurate annotations and temporal alignment of both ambient sounds and speech with the video content, we developed an online annotation pipeline that processes the required training data and generates labels during the training process. This strategy circumvents the performance degradation often caused by misaligned text-based annotations. Through the synergy of these techniques, our model, after being finetuned on approximately 7,600 hours of audio-video data, produces well-coordinated audio-visual results for ambient sound generation and strong alignment for speech generation. To systematically evaluate our proposed method, we introduce Verse-Bench, a new benchmark dataset. In an effort to advance research in audio-video generation and to close the performance gap with state-of-the-art models such as Veo-3, we make our model and code publicly available. We hope this contribution will benefit the broader research community. Project page: https://dorniwang.github.io/UniVerse-1/.
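
The abstract describes the stitching of experts (SoE) idea only at a high level. Below is a minimal, hedged sketch of what fusing corresponding blocks of a pre-trained video expert and a pre-trained music/audio expert could look like; the class names (StitchedBlock, StitchedExperts), the learned linear cross-modal exchange, and the stand-in transformer blocks are illustrative assumptions, not the paper's actual architecture or released code.

```python
# Hedged sketch of block-level "stitching of experts" (SoE): corresponding
# blocks of two pre-trained experts are paired and exchange information so
# that audio and video are generated in a temporally coordinated way.
# All names and the fusion operator here are assumptions for illustration.
import torch
import torch.nn as nn


class StitchedBlock(nn.Module):
    """Pair one video-expert block with its corresponding audio-expert block."""

    def __init__(self, video_block: nn.Module, audio_block: nn.Module, dim: int):
        super().__init__()
        self.video_block = video_block  # pre-trained block, frozen or lightly finetuned
        self.audio_block = audio_block
        # Simple learned cross-modal exchange (an assumption, not the paper's operator).
        self.video_to_audio = nn.Linear(dim, dim)
        self.audio_to_video = nn.Linear(dim, dim)

    def forward(self, v: torch.Tensor, a: torch.Tensor):
        v = self.video_block(v)  # (B, T_video, dim)
        a = self.audio_block(a)  # (B, T_audio, dim)
        # Exchange pooled summaries between the two streams so each modality
        # conditions on the other at every block (the "deep fusion" idea).
        v = v + self.audio_to_video(a.mean(dim=1, keepdim=True))
        a = a + self.video_to_audio(v.mean(dim=1, keepdim=True))
        return v, a


class StitchedExperts(nn.Module):
    """Stack stitched blocks built from two pre-trained expert models."""

    def __init__(self, video_blocks, audio_blocks, dim: int):
        super().__init__()
        assert len(video_blocks) == len(audio_blocks)
        self.blocks = nn.ModuleList(
            [StitchedBlock(vb, ab, dim) for vb, ab in zip(video_blocks, audio_blocks)]
        )

    def forward(self, video_tokens: torch.Tensor, audio_tokens: torch.Tensor):
        for block in self.blocks:
            video_tokens, audio_tokens = block(video_tokens, audio_tokens)
        return video_tokens, audio_tokens


# Toy usage with stand-in blocks; the real experts would be loaded from
# pre-trained video and music generation checkpoints.
dim = 64
make = lambda: nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
model = StitchedExperts([make() for _ in range(2)], [make() for _ in range(2)], dim)
v, a = model(torch.randn(1, 16, dim), torch.randn(1, 32, dim))
```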

Community

Paper author and submitter, edited Sep 9:

Please see our project page for video demos: https://dorniwang.github.io/UniVerse-1/


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 3