Improve dataset card: Add task categories, GitHub link, comprehensive sample usage, and clarify license (#1)
Commit 3807333834a1c991ed2e4544998c7ebb10eb39d4
Co-authored-by: Niels Rogge <[email protected]>
    	
README.md CHANGED
Removed lines:

@@ -1,9 +1,18 @@
-**[Paper](https://arxiv.org/abs/2509.19296), [Project Page](https://research.nvidia.com/labs/toronto-ai/lyra/)**

@@ -15,8 +24,8 @@
-[Huan Ling](https://www.cs.
-[Jun Gao](https://www.cs.

@@ -34,8 +43,7 @@
-https://docs.google.com/spreadsheets/d/1e1K8nsMV9feowjmgXhdfa0qo-oGJNlnsBc1Qhwck7vU/edit?usp=sharing

@@ -63,12 +71,79 @@
-NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications.

@@ -88,9 +163,9 @@
-        Lu, Yifan and Nimier-David, Merlin and

Updated README.md:

---
license: cc-by-4.0
task_categories:
- image-to-3d
- text-to-3d
tags:
- 3d-reconstruction
- gaussian-splatting
- video-diffusion
- synthetic-data
---

# Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

**[Paper](https://arxiv.org/abs/2509.19296), [Project Page](https://research.nvidia.com/labs/toronto-ai/lyra/), [Code](https://github.com/nv-tlabs/lyra)**

[Sherwin Bahmani](https://sherwinbahmani.github.io/),
[Tianchang Shen](https://www.cs.toronto.edu/~shenti11/),
[...]
[David B. Lindell](https://davidlindell.com/),
[Zan Gojcic](https://zgojcic.github.io/),
[Sanja Fidler](https://www.cs.utoronto.ca/~fidler/),
[Huan Ling](https://www.cs.utoronto.ca/~linghuan/),
[Jun Gao](https://www.cs.utoronto.ca/~jungao/),
[Xuanchi Ren](https://xuanchiren.com/) <br>

## Dataset Description:
[...]
2025/09/23

## License/Terms of Use:
This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).

## Intended Usage:
Researchers and academics working in spatial intelligence problems can use it to train AI models for multi-view video generation or reconstruction.
[...]

Storage: 25TB

## Sample Usage

Lyra supports both images and videos as input for 3D Gaussian generation. First, you need to download the demo samples:

```bash
# Download test samples from Hugging Face
huggingface-cli download nvidia/Lyra-Testing-Example --repo-type dataset --local-dir assets/demo
```
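
The demo archive's exact contents are not listed on this card, but as a quick, optional sanity check you can confirm that the inputs referenced by the example commands below were downloaded (the two paths are taken directly from those commands):

```bash
# Optional sanity check: verify the demo inputs used by the examples below exist.
# These paths are the ones referenced by the example commands; adjust if the layout differs.
ls -lh assets/demo/static/diffusion_input/images/00172.png
ls -lh assets/demo/dynamic/diffusion_input/rgb/6a71ee0422ff4222884f1b2a3cba6820.mp4
```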

### Example 1: Single Image to 3D Gaussians Generation

1) Generate multi-view video latents from the input image using scripts/bash/static_sdg.sh:

```bash
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_single_image_sdg.py \
    --checkpoint_dir checkpoints \
    --num_gpus 1 \
    --input_image_path assets/demo/static/diffusion_input/images/00172.png \
    --video_save_folder assets/demo/static/diffusion_output_generated \
    --foreground_masking \
    --multi_trajectory
```

2) Reconstruct multi-view video latents with the 3DGS decoder:

```bash
accelerate launch sample.py --config configs/demo/lyra_static.yaml
```

### Example 2: Single Video to Dynamic 3D Gaussians Generation

1) Generate multi-view video latents from the input video and ViPE-estimated depth using scripts/bash/dynamic_sdg.sh:

```bash
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) torchrun --nproc_per_node=1 cosmos_predict1/diffusion/inference/gen3c_dynamic_sdg.py \
    --checkpoint_dir checkpoints \
    --vipe_path assets/demo/dynamic/diffusion_input/rgb/6a71ee0422ff4222884f1b2a3cba6820.mp4 \
    --video_save_folder assets/demo/dynamic/diffusion_output \
    --disable_prompt_upsampler \
    --num_gpus 1 \
    --foreground_masking \
    --multi_trajectory
```

2) Reconstruct multi-view video latents with the 3DGS decoder:

```bash
accelerate launch sample.py --config configs/demo/lyra_dynamic.yaml
```

### Training

To train, you need to download the full training data (this dataset) from Hugging Face:

```bash
# Download our training datasets from Hugging Face and untar them into static/dynamic folders
huggingface-cli download nvidia/PhysicalAI-SpatialIntelligence-Lyra-SDG --repo-type dataset --local-dir lyra_dataset/tar
```
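
The download above consists of tar archives that need to be extracted before training. A minimal sketch, assuming the archives under lyra_dataset/tar are split into static and dynamic subsets and should be unpacked into matching folders (the exact archive names and target layout are documented in the GitHub repository):

```bash
# Sketch only: unpack the downloaded tars into static/dynamic folders.
# The static*/dynamic* archive naming is an assumption; follow the GitHub
# repository's instructions for the layout expected by the training configs.
for split in static dynamic; do
    mkdir -p "lyra_dataset/${split}"
    for f in lyra_dataset/tar/${split}*.tar; do
        [ -e "$f" ] || continue   # skip if no archives match this pattern
        tar -xf "$f" -C "lyra_dataset/${split}"
    done
done
```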

Then you can use the provided progressive training script (as detailed in the GitHub repository):

```bash
bash train.sh
```

For more detailed usage instructions, including how to test on your own videos or perform training, please refer to the [Lyra GitHub repository](https://github.com/nv-tlabs/lyra).

## Reference(s):

- [GEN3C](https://github.com/nv-tlabs/GEN3C)

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

[...]

```
@inproceedings{ren2025gen3c,
    title={GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control},
    author={Ren, Xuanchi and Shen, Tianchang and Huang, Jiahui and Ling, Huan and
        Lu, Yifan and Nimier-David, Merlin and Müller, Thomas and Keller, Alexander and
        Fidler, Sanja and Gao, Jun},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2025}
}
```