Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation
Sherwin Bahmani, Tianchang Shen, Jiawei Ren, Jiahui Huang, Yifeng Jiang, Haithem Turki, Andrea Tagliasacchi, David B. Lindell, Zan Gojcic, Sanja Fidler, Huan Ling, Jun Gao, Xuanchi Ren
Dataset Description:
The PhysicalAI-SpatialIntelligence-Lyra-SDG dataset is a multi-view 3D and 4D dataset generated with GEN3C. The 3D reconstruction setup contains 59,031 images, while the 4D setup contains 7,378 videos, all generated from diverse text prompts spanning indoor and outdoor environments, humans, animals, and both realistic and imaginative content. We synthesize 6 camera trajectories for each image (3D) or video (4D), yielding 354,186 videos for the 3D setup and 44,268 videos for the 4D setup. Each video is provided as RGB frames together with its camera poses and depth maps.
This dataset is ready for commercial use.
Dataset Owner(s):
NVIDIA Corporation
Dataset Creation Date:
2025/09/23
License/Terms of Use:
This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC-BY-4.0).
Intended Usage:
Researchers and academics working on spatial intelligence problems can use this dataset to train AI models for multi-view video generation or reconstruction.
Dataset Characterization:
**Data Collection Method**
[Synthetic]
**Labeling Method**
[Synthetic]
Dataset Format:
RGB videos in .mp4, camera poses in .npz, depth maps in .zip format
Dataset Quantification:
The 3D reconstruction setup has 59,031 multi-view examples, while the 4D setup has 7,378 multi-view examples. Each multi-view example has 6 views, and each view comprises an RGB video together with its camera poses and depth maps.
Field | Format
---|---
Video | .mp4
Camera pose | .npz
Depth | .zip
Storage: 25TB
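As a minimal sketch of how these files might be read, the snippet below loads one RGB video, one camera-pose archive, and one depth archive with standard Python tooling. The directory layout, file names, and .npz key names shown are assumptions for illustration only; inspect the extracted data for the actual structure.
# Minimal loading sketch (illustrative only): the paths and layout below are
# assumptions; inspect the extracted dataset for the actual structure.
import zipfile
import numpy as np
import imageio.v3 as iio  # requires an ffmpeg backend for reading .mp4 files

rgb_path = "lyra_dataset/static/example_00000/view_0/rgb.mp4"      # hypothetical path
pose_path = "lyra_dataset/static/example_00000/view_0/camera.npz"  # hypothetical path
depth_path = "lyra_dataset/static/example_00000/view_0/depth.zip"  # hypothetical path

# RGB video -> (num_frames, height, width, 3) uint8 array
frames = np.stack(list(iio.imiter(rgb_path)))
print("video:", frames.shape, frames.dtype)

# Camera poses: list the stored arrays rather than assuming key names
poses = np.load(pose_path)
print("camera arrays:", poses.files)

# Depth maps are packed in a zip archive; list its members before reading them
with zipfile.ZipFile(depth_path) as zf:
    print("depth members:", zf.namelist()[:5])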
Sample Usage
Lyra supports both images and videos as input for 3D Gaussian generation. First, you need to download the demo samples:
# Download test samples from Hugging Face
huggingface-cli download nvidia/Lyra-Testing-Example --repo-type dataset --local-dir assets/demo
Example 1: Single Image to 3D Gaussians Generation
- Generate multi-view video latents from the input image using scripts/bash/static_sdg.sh.
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_single_image_sdg.py \
--checkpoint_dir checkpoints \
--num_gpus 1 \
--input_image_path assets/demo/static/diffusion_input/images/00172.png \
--video_save_folder assets/demo/static/diffusion_output_generated \
--foreground_masking \
--multi_trajectory
- Reconstruct multi-view video latents with the 3DGS decoder:
accelerate launch sample.py --config configs/demo/lyra_static.yaml
Example 2: Single Video to Dynamic 3D Gaussians Generation
- Generate multi-view video latents from the input video and ViPE estimated depth using scripts/bash/dynamic_sdg.sh.
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_dynamic_sdg.py \
--checkpoint_dir checkpoints \
--vipe_path assets/demo/dynamic/diffusion_input/rgb/6a71ee0422ff4222884f1b2a3cba6820.mp4 \
--video_save_folder assets/demo/dynamic/diffusion_output \
--disable_prompt_upsampler \
--num_gpus 1 \
--foreground_masking \
--multi_trajectory
- Reconstruct multi-view video latents with the 3DGS decoder:
accelerate launch sample.py --config configs/demo/lyra_dynamic.yaml
Training
To train, you need to download the full training data (this dataset) from Hugging Face:
# Download our training datasets from Hugging Face and untar them into a static/dynamic folder
huggingface-cli download nvidia/PhysicalAI-SpatialIntelligence-Lyra-SDG --repo-type dataset --local-dir lyra_dataset/tar
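The comment above mentions untarring the shards into static/dynamic folders; below is a minimal extraction sketch, assuming the shards sit under lyra_dataset/tar and are grouped by subset. The shard layout and naming pattern are assumptions, so adjust the glob patterns to the files actually downloaded.
# Illustrative extraction sketch: the tar shard layout under lyra_dataset/tar is an
# assumption; adapt the patterns to the downloaded files before running.
import tarfile
from pathlib import Path

for subset in ("static", "dynamic"):
    out_dir = Path("lyra_dataset") / subset
    out_dir.mkdir(parents=True, exist_ok=True)
    for shard in sorted(Path("lyra_dataset/tar").rglob(f"*{subset}*.tar")):
        with tarfile.open(shard) as tf:
            tf.extractall(out_dir)  # untar each shard into its subset folder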
Then you can use the provided progressive training script (as detailed in the GitHub repository):
bash train.sh
For more detailed usage instructions, including how to test on your own videos or perform training, please refer to the Lyra GitHub repository.
Reference(s):
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns here.
Citation
@inproceedings{bahmani2025lyra,
title={Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation},
author={Bahmani, Sherwin and Shen, Tianchang and Ren, Jiawei and Huang, Jiahui and Jiang, Yifeng and
Turki, Haithem and Tagliasacchi, Andrea and Lindell, David B. and Gojcic, Zan and Fidler, Sanja and
Ling, Huan and Gao, Jun and Ren, Xuanchi},
booktitle={arXiv preprint arXiv:2509.19296},
year={2025}
}
@inproceedings{ren2025gen3c,
title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and
Lu, Yifan and Nimier-David, Merlin and M{\"u}ller, Thomas and Keller, Alexander and
Fidler, Sanja and Gao, Jun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2025}
}