---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- VLA
- humanoid
- teleoperation
- loco-manipulation
- isaac-lab
- unitree-g1
- psi0
configs:
- config_name: default
  data_files: "data/**/*.parquet"
---

# Psi0 Apple-to-Plate VR Teleoperation Dataset

Human-demonstrated loco-manipulation trajectories for fine-tuning the [Psi0](https://github.com/physical-superintelligence-lab/Psi0) Vision-Language-Action (VLA) model on a Unitree G1 humanoid robot in Isaac Lab simulation.

## Dataset Summary

| Property | Value |
|---|---|
| **Robot** | Unitree G1 29-DOF (dexterous hands + whole-body locomotion) |
| **Task** | Pick up the apple, walk left, and place the apple on the plate |
| **Episodes** | 79 (filtered from 81; removed ep49, 22 frames < chunk_size, and ep80, a metadata artifact) |
| **Total frames** | 86,855 |
| **Mean episode length** | 1,099 frames (36.6 s) |
| **Episode range** | 19.4 s – 73.5 s |
| **FPS** | 30 Hz |
| **Total duration** | ~48.3 min |
| **Format** | [LeRobot v2.1](https://github.com/huggingface/lerobot) |
| **Simulator** | NVIDIA Isaac Lab (Isaac Sim 4.5 + PhysX 5) |
| **Scene** | `apple_to_plate` (ported from RoboCasa/MuJoCo, RTX rendering) |
| **Collection method** | VR teleoperation (Pico 4 Ultra headset + WebXR controllers) |
| **Locomotion** | AMO policy (System-0, RL-based, 50 Hz control rate) |
| **Size on disk** | ~390 MB |

## Task Description

The operator wears a Pico 4 Ultra VR headset and controls the G1 robot in Isaac Lab via WebXR controllers. The task is a multi-phase loco-manipulation sequence:

1. **Walk forward** toward the apple on the table
2. **Reach and grasp** the apple with the right hand
3. **Walk left** toward the plate
4. **Place** the apple on the plate

Locomotion (forward/lateral velocity, yaw rate) is commanded via the left joystick and executed by the AMO locomotion policy. Upper-body control (arms, hands, torso) is mapped from VR controller poses via analytical IK.
WebRTC H.264 video feedback from the robot's egocentric camera streams back to the VR headset at 30 FPS.

## Data Format

### Features

| Feature | Type | Shape | Description |
|---|---|---|---|
| `observation.images.egocentric` | video (H.264) | (480, 640, 3) | RGB egocentric camera (D435i mount, 47.6° pitch down) |
| `states` | float32 | (28,) | Joint positions: hand(14) + arm(14) |
| `action` | float32 | (36,) | Action targets: hand(14) + arm(14) + torso(3) + height(1) + loco(4) |
| `timestamp` | float32 | (1,) | Time within episode (seconds) |
| `frame_index` | int64 | (1,) | Frame index within episode |
| `episode_index` | int64 | (1,) | Episode index |
| `index` | int64 | (1,) | Global frame index |
| `task_index` | int64 | (1,) | Task index (0 for all frames) |
| `next.done` | bool | (1,) | True on last frame of episode |
| `obs_timestamp` | float32 | (1,) | Observation wall-clock timestamp |
| `action_timestamp` | float32 | (1,) | Action wall-clock timestamp |

### Action Space (36-dim)

```
Index    Name          Description
─────    ────          ───────────
[0:7]    Left hand     thumb_0/1/2, index_0/1, middle_0/1
[7:14]   Right hand    thumb_0/1/2, index_0/1, middle_0/1
[14:21]  Left arm      shoulder_pitch/roll/yaw, elbow, wrist_roll/pitch/yaw
[21:28]  Right arm     shoulder_pitch/roll/yaw, elbow, wrist_roll/pitch/yaw
[28:31]  Torso         roll, pitch, yaw
[31]     Base height   target standing height (m)
[32:35]  Locomotion    vx (forward), vy (lateral), vyaw (yaw rate)
[35]     Target yaw    heading angle (rad)
```

### State Space (28-dim)

```
Index    Name          Description
─────    ────          ───────────
[0:7]    Left hand     joint positions (rad)
[7:14]   Right hand    joint positions (rad)
[14:21]  Left arm      joint positions (rad)
[21:28]  Right arm     joint positions (rad)
```

### Action Statistics (per joint)

| Joint | Min | Max | Mean | Std |
|---|---|---|---|---|
| L_thumb_0 | 0.000 | 1.050 | 0.011 | 0.104 |
| L_thumb_1 | 0.000 | 0.920 | 0.010 | 0.091 |
| L_thumb_2 | 0.000 | 1.750 | 0.018 | 0.174 |
| L_index_0 | -1.570 | 0.000 | -0.016 | 0.156 |
| L_index_1 | -1.750 | 0.000 | -0.018 | 0.174 |
| L_middle_0 | -1.570 | 0.000 | -0.016 | 0.156 |
| L_middle_1 | -1.750 | 0.000 | -0.018 | 0.174 |
| R_thumb_0 | -1.050 | 0.000 | -0.364 | 0.410 |
| R_thumb_1 | -0.920 | 0.000 | -0.319 | 0.360 |
| R_thumb_2 | -1.750 | 0.000 | -0.607 | 0.684 |
| R_index_0 | 0.000 | 1.570 | 0.544 | 0.614 |
| R_index_1 | 0.000 | 1.750 | 0.607 | 0.684 |
| R_middle_0 | 0.000 | 1.570 | 0.544 | 0.614 |
| R_middle_1 | 0.000 | 1.750 | 0.607 | 0.684 |
| L_shoulder_p | -0.553 | 0.379 | -0.159 | 0.162 |
| L_shoulder_r | -0.201 | 0.584 | 0.268 | 0.105 |
| L_shoulder_y | -0.064 | 0.604 | 0.246 | 0.095 |
| L_elbow | -0.740 | 1.367 | 1.126 | 0.466 |
| L_wrist_r | -1.012 | 0.611 | -0.065 | 0.117 |
| L_wrist_p | -1.028 | 0.146 | -0.396 | 0.176 |
| L_wrist_y | -0.621 | 0.348 | -0.007 | 0.108 |
| R_shoulder_p | -1.648 | 0.148 | -0.485 | 0.300 |
| R_shoulder_r | -0.726 | 0.323 | -0.068 | 0.177 |
| R_shoulder_y | -0.943 | 0.537 | -0.153 | 0.234 |
| R_elbow | -0.586 | 1.362 | 0.368 | 0.474 |
| R_wrist_r | -0.700 | 1.681 | 0.351 | 0.522 |
| R_wrist_p | -1.036 | 0.571 | -0.249 | 0.329 |
| R_wrist_y | -0.514 | 1.325 | 0.334 | 0.343 |
| torso_roll | 0.000 | 0.000 | 0.000 | 0.000 |
| torso_pitch | 0.391 | 0.727 | 0.540 | 0.058 |
| torso_yaw | -0.363 | 0.317 | -0.005 | 0.075 |
| base_height | 0.750 | 0.750 | 0.750 | 0.000 |
| vx | -0.759 | 0.757 | 0.011 | 0.345 |
| vy | -0.757 | 0.760 | 0.047 | 0.262 |
| vyaw | -0.750 | 0.750 | -0.001 | 0.252 |
| target_yaw | 0.000 | 0.000 | 0.000 | 0.000 |

## Collection Pipeline

```
Pico 4 Ultra (VR) ──WebXR──► teleop_bridge.py (IK) ──ZMQ──► Isaac Lab (PhysX 500Hz)
        ▲                                                          │
        └──────── WebRTC H.264 30fps ◄── TiledCamera 640x480 ──────┘
                                                                   │
                                                  data_recorder.py (LeRobot format)
```

- **Simulator:** NVIDIA Isaac Lab 2.3.2 (Isaac Sim 4.5, PhysX 5, RTX rendering)
- **Physics:** 500 Hz internal, 50 Hz control rate (AMO substeps=10)
- **Camera:** TiledCamera 640x480 RGB mounted on `head_link`, matching RealSense D435i position and angle (47.6° pitch down)
- **Locomotion:** AMO policy (System-0) — RL-based, ported from IsaacGym. Inputs: 1043+2325 dim proprioception. Outputs: 15-dim leg PD targets
- **Action semantics:** Absolute joint positions (not deltas)
- **Normalization:** `bounds` type (see `meta/stats_psi0.json`)
- **Action-observation sync:** p50 = 50 ms, p95 = 66 ms

## Directory Structure

```
├── data/chunk-000/
│   ├── episode_000000.parquet
│   ├── episode_000001.parquet
│   └── ... (79 files)
├── videos/chunk-000/egocentric/
│   ├── episode_000000.mp4
│   ├── episode_000001.mp4
│   └── ... (79 files)
├── meta/
│   ├── info.json
│   ├── episodes.jsonl
│   ├── tasks.jsonl
│   ├── episodes_stats.jsonl
│   └── stats_psi0.json
└── README.md
```

## Intended Use

This dataset is designed for fine-tuning the Psi0 VLA model using `finetune_simple_psi0_config` with `SimpleRepackTransform` and `bounds` normalization. The training pipeline expects:

- Action chunk size: 30 frames
- VLM backbone: frozen (Qwen3-VL-2B-Instruct)
- Action expert: MM-DiT (~500M params, flow matching)
- Batch size: 128 (16/GPU x 8 GPUs)
- Training steps: 40,000

## Data Cleaning

This dataset was filtered from 81 to 79 episodes using `scripts/data/filter_episodes.py`:

| Removed | Reason |
|---|---|
| Episode 49 (original) | Only 22 frames (0.7 s) — shorter than `action_chunk_size=30`, so it would be padded during training |
| Episode 80 (original) | Metadata artifact from an `info.json` off-by-one in the original recording pipeline |

Remaining episodes were renumbered contiguously (0–78). All global indices, frame indices, and normalization stats were recomputed. Validated with `scripts/data/validate_dataset_preflight.py --strict`.
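The 36-dim action layout documented above can be unpacked into named sub-vectors for inspection or custom training code. Below is a minimal numpy sketch; the `ACTION_SLICES` table and `split_action` helper are illustrative, not part of the dataset metadata or the Psi0 codebase — only the index ranges come from this card:

```python
import numpy as np

# Index layout of the 36-dim action vector, per the Action Space table above.
# Group names are illustrative; only the index ranges are from the dataset card.
ACTION_SLICES = {
    "left_hand":   slice(0, 7),    # thumb_0/1/2, index_0/1, middle_0/1
    "right_hand":  slice(7, 14),
    "left_arm":    slice(14, 21),  # shoulder p/r/y, elbow, wrist r/p/y
    "right_arm":   slice(21, 28),
    "torso":       slice(28, 31),  # roll, pitch, yaw
    "base_height": slice(31, 32),  # target standing height (m)
    "locomotion":  slice(32, 35),  # vx, vy, vyaw
    "target_yaw":  slice(35, 36),  # heading angle (rad)
}

def split_action(action: np.ndarray) -> dict[str, np.ndarray]:
    """Split one (36,) action row into named sub-vectors."""
    if action.shape != (36,):
        raise ValueError(f"expected shape (36,), got {action.shape}")
    return {name: action[s] for name, s in ACTION_SLICES.items()}
```

For example, `split_action(frame)["locomotion"]` returns the `(vx, vy, vyaw)` velocity command for a single frame's action row.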
## Citation

If you use this dataset, please cite both this dataset and the original Ψ₀ project:

```bibtex
@misc{setubal2026psi0appletoplate,
  title={Psi0 Apple-to-Plate VR Teleoperation Dataset},
  author={Pedro Setubal and CloudWalk Research Lab},
  year={2026},
  howpublished={\url{https://huggingface.co/datasets/cloudwalk-research/psi0-apple-to-plate-teleop}},
}
```

This dataset is built for fine-tuning the Ψ₀ VLA model. Please also cite the original work:

```bibtex
@misc{wei2026psi0,
  title={$\Psi_0$: An Open Foundation Model Towards Universal Humanoid Loco-Manipulation},
  author={Songlin Wei and Hongyi Jing and Boqian Li and Zhenyu Zhao and Jiageng Mao and Zhenhao Ni and Sicheng He and Jie Liu and Xiawei Liu and Kaidi Kang and Sheng Zang and Weiduo Yuan and Marco Pavone and Di Huang and Yue Wang},
  year={2026},
  eprint={2603.12263},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2603.12263},
}
```