---
title: HY-Motion-1.0
emoji: π
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 4.44.0
app_file: gradio_app.py
pinned: false
short_description: Text-to-3D Human Motion Generation
---
# HY-Motion 1.0: Scaling Flow Matching Models for 3D Motion Generation
## 🔥 News
- Dec 30, 2025: 🤗 We released the inference code and pretrained models of HY-Motion 1.0. Please give it a try via our HuggingFace Space and our Official Site!
## Introduction
HY-Motion 1.0 is a series of text-to-3D human motion generation models based on Diffusion Transformer (DiT) and Flow Matching. It allows developers to generate skeleton-based 3D character animations from simple text prompts, which can be directly integrated into various 3D animation pipelines. This model series is the first to scale DiT-based text-to-motion models to the billion-parameter level, achieving significant improvements in instruction-following capabilities and motion quality over existing open-source models.
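Because the demo is served through `gradio_app.py` (see the frontmatter above), one straightforward way to try the model programmatically is via the `gradio_client` package. The snippet below is only a sketch: the Space id and the `api_name` are assumptions, not part of this repository's documented API, so check the Space's "Use via API" panel for the actual endpoint and argument list.

```python
# Minimal sketch: calling the hosted demo with gradio_client.
# The Space id and api_name below are assumptions; consult the Space's
# "Use via API" panel for the real endpoint and arguments.
from gradio_client import Client

client = Client("tencent/HY-Motion-1.0")  # hypothetical Space id
result = client.predict(
    "a person walks forward, then jumps over a low obstacle",  # text prompt
    api_name="/predict",  # assumed endpoint exposed by gradio_app.py
)
print(result)  # typically a file path (or paths) to the generated animation
```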
## Key Features
- **State-of-the-Art Performance**: Achieves state-of-the-art results in both instruction-following capability and generated motion quality.
- **Billion-Scale Models**: We are the first to successfully scale DiT-based models to the billion-parameter level for text-to-motion generation, resulting in superior instruction understanding and following capabilities that outperform comparable open-source models.
- **Advanced Three-Stage Training**: Our models are trained with a comprehensive three-stage process (a minimal flow-matching training step is sketched after this list):
  - **Large-Scale Pre-training**: Trained on over 3,000 hours of diverse motion data to learn a broad motion prior.
  - **High-Quality Fine-tuning**: Fine-tuned on 400 hours of curated, high-quality 3D motion data to enhance motion detail and smoothness.
  - **Reinforcement Learning**: Uses reinforcement learning from human feedback and reward models to further refine instruction following and motion naturalness.
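For context on the Flow Matching objective mentioned above, here is a minimal, self-contained sketch of a rectified-flow training step in PyTorch. It is illustrative only: the model call signature, tensor shapes, and text-conditioning interface are assumptions and do not reflect HY-Motion's actual training code.

```python
# Minimal flow-matching (rectified-flow) training step, not HY-Motion's code:
# "model", tensor shapes, and the conditioning interface are illustrative.
import torch
import torch.nn.functional as F

def flow_matching_step(model, x1, text_emb):
    """One training step: regress the velocity field between noise and data.

    x1:       clean motion latents, shape (B, T, D)   (assumed layout)
    text_emb: text-prompt embeddings used as conditioning
    """
    b = x1.shape[0]
    x0 = torch.randn_like(x1)                 # noise sample
    t = torch.rand(b, device=x1.device)       # random time in [0, 1]
    t_ = t.view(b, 1, 1)
    xt = (1.0 - t_) * x0 + t_ * x1            # point on the straight path
    v_target = x1 - x0                        # constant target velocity
    v_pred = model(xt, t, text_emb)           # DiT predicts the velocity
    return F.mse_loss(v_pred, v_target)
```

The model learns to predict the constant velocity `x1 - x0` along the straight path between noise and data; at inference time, integrating the predicted velocity field from t=0 to t=1 turns a noise sample into a motion sequence conditioned on the prompt.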
## 🔗 BibTeX
If you find this repository helpful, please cite our report:
@article{hymotion2025,
  title={HY-Motion 1.0: Scaling Flow Matching Models for Text-To-Motion Generation},
  author={Tencent Hunyuan 3D Digital Human Team},
  journal={arXiv preprint arXiv:2512.23464},
  year={2025}
}
## Acknowledgements
We would like to thank the contributors to the FLUX, diffusers, HuggingFace, SMPL/SMPLH, CLIP, Qwen3, PyTorch3D, kornia, transforms3d, FBX-SDK, GVHMR, and HunyuanVideo repositories and tools for their open research and exploration.