RoboChallenge Dataset
Tasks and Embodiments
The dataset includes 30 diverse manipulation tasks (listed below) across 4 embodiments:
Available Tasks
`arrange_flowers`, `arrange_fruits_in_basket`, `arrange_paper_cups`, `clean_dining_table`, `fold_dishcloth`, `hang_toothbrush_cup`, `make_vegetarian_sandwich`, `move_objects_into_box`, `open_the_drawer`, `place_shoes_on_rack`, `plug_in_network_cable`, `pour_fries_into_plate`, `press_three_buttons`, `put_cup_on_coaster`, `put_opener_in_drawer`, `put_pen_into_pencil_case`, `scan_QR_code`, `search_green_boxes`, `set_the_plates`, `shred_scrap_paper`, `sort_books`, `sort_electronic_products`, `stack_bowls`, `stack_color_blocks`, `stick_tape_to_box`, `sweep_the_rubbish`, `turn_on_faucet`, `turn_on_light_switch`, `water_potted_plant`, `wipe_the_table`
Embodiments
- ARX5 - Single-arm with triple camera setup (wrist + global + right-side views)
- UR5 - Single-arm with dual camera setup (wrist + global views)
- FRANKA - Single-arm with triple camera setup (wrist + main + side views)
- ALOHA - Dual-arm with triple wrist camera setup (left wrist + right wrist + global views)
Dataset Structure
Hierarchy
The dataset is organized by tasks, with each task containing multiple demonstration episodes:
```
.
├── <task_name>/                       # e.g., arrange_flowers, fold_dishcloth
│   ├── task_desc.json                 # Task description
│   ├── meta/                          # Task-level metadata
│   │   └── task_info.json
│   └── data/                          # Episode data
│       ├── episode_000000/            # Individual episode
│       │   ├── meta/
│       │   │   └── episode_meta.json  # Episode metadata
│       │   ├── states/
│       │   │   # for single-arm (ARX5, UR5, Franka)
│       │   │   ├── states.jsonl       # Single-arm robot states
│       │   │   # for dual-arm (ALOHA)
│       │   │   ├── left_states.jsonl  # Left arm states
│       │   │   └── right_states.jsonl # Right arm states
│       │   └── videos/
│       │       # Video configurations vary by robot model:
│       │       # ARX5
│       │       ├── arm_realsense_rgb.mp4       # Wrist view
│       │       ├── global_realsense_rgb.mp4    # Global view
│       │       ├── right_realsense_rgb.mp4     # Side view
│       │       # UR5
│       │       ├── global_realsense_rgb.mp4    # Global view
│       │       ├── handeye_realsense_rgb.mp4   # Wrist view
│       │       # Franka
│       │       ├── handeye_realsense_rgb.mp4   # Wrist view
│       │       ├── main_realsense_rgb.mp4      # Global view
│       │       ├── side_realsense_rgb.mp4      # Side view
│       │       # ALOHA
│       │       ├── cam_high_rgb.mp4            # Global view
│       │       ├── cam_wrist_left_rgb.mp4      # Left wrist view
│       │       └── cam_wrist_right_rgb.mp4     # Right wrist view
│       ├── episode_000001/
│       └── ...
├── convert_to_lerobot.py              # Conversion script
└── README.md
```
Metadata Schema
task_info.json
```jsonc
{
  "robot_id": "arx5_1",                // Robot model identifier
  "task_desc": {
    "task_name": "arrange_flowers",    // Task identifier
    "prompt": "insert the three flowers on the table into the vase one by one",
    "scoring": "...",                  // Scoring criteria
    "task_tag": [                      // Task characteristics
      "repeated",
      "single-arm",
      "ARX5",
      "precise3d"
    ]
  },
  "video_info": {
    "fps": 30,                         // Video frame rate
    "ext": "mp4",                      // Video format
    "encoding": {
      "vcodec": "libx264",             // Video codec
      "pix_fmt": "yuv420p"             // Pixel format
    }
  }
}
```
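A minimal sketch for reading this metadata (assuming the file on disk is plain JSON; the `//` comments above are annotations for this README, not part of the file):

```python
import json

with open("arrange_flowers/meta/task_info.json") as f:  # illustrative path
    info = json.load(f)

print(info["task_desc"]["prompt"])  # natural-language instruction
print(info["video_info"]["fps"])    # video frame rate, 30 here
```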
episode_meta.json
```jsonc
{
  "episode_index": 0,                  // Episode number
  "start_time": 1750405586.3430033,    // Unix timestamp (start)
  "end_time": 1750405642.5247612,      // Unix timestamp (end)
  "frames": 1672                       // Total video frames
}
```
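The episode duration and frame count should agree with the fps declared in task_info.json, which makes for a quick sanity check. A minimal sketch (episode path illustrative):

```python
import json
from pathlib import Path

episode = Path("arrange_flowers/data/episode_000000")  # illustrative path

with open(episode / "meta" / "episode_meta.json") as f:
    meta = json.load(f)

duration = meta["end_time"] - meta["start_time"]   # seconds
print(f"{meta['frames']} frames / {duration:.1f} s "
      f"= ~{meta['frames'] / duration:.1f} fps")    # ~29.8 fps for the example above
```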
Robot States Schema
Each episode stores its robot states in JSONL format; the file layout depends on the embodiment:

- Single-arm robots (ARX5, UR5, Franka) → `states.jsonl`
- Dual-arm robots (ALOHA) → `left_states.jsonl` and `right_states.jsonl`

Each file records the robot's proprioceptive signals per frame, including joint angles, end-effector poses, gripper states, and timestamps. The exact field definitions and coordinate conventions vary by platform, as summarized below.
ARX5
| Data Name | Data Key | Shape | Semantics |
|---|---|---|---|
| Joint control | joint_positions | (6,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | ee_positions | (6,) | End-effector pose (tx, ty, tz, roll, pitch, yaw), where (roll, pitch, yaw) are Euler angles relative to the arm base frame. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper | (1,) | Actual gripper width measurement in meters. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
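A minimal sketch for loading these per-frame records into arrays (keys per the table above; one JSON object per line, one line per frame):

```python
import json
import numpy as np

with open("states/states.jsonl") as f:  # illustrative path
    frames = [json.loads(line) for line in f]

joints = np.array([fr["joint_positions"] for fr in frames])  # (T, 6) radians
ee     = np.array([fr["ee_positions"]    for fr in frames])  # (T, 6) tx, ty, tz, roll, pitch, yaw
grip   = np.array([fr["gripper"]         for fr in frames])  # (T, 1) meters
ts     = np.array([fr["timestamp"]       for fr in frames])  # (T, 1) milliseconds
print(joints.shape, ee.shape, grip.shape, ts.shape)
```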
UR5
| Data Name | Data Key | Shape | Semantics |
|---|---|---|---|
| Joint control | joint_positions | (6,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | ee_positions | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is the quaternion rotation. X: front to back; Y: left to right; Z: down to up. |
| Gripper control | gripper | (1,) | Gripper closing angle: 0 for fully open, 255 for fully closed. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
Franka
| Data Name | Data Key | Shape | Semantics |
|---|---|---|---|
| Joint control | joint_positions | (7,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | ee_positions | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is the quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper | (2,) | Gripper trigger signals, in (close_button, open_button) order. |
| Gripper width | gripper_width | (1,) | Actual gripper width measurement. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
ALOHA
| Data Name | Data Key | Shape | Semantics |
|---|---|---|---|
| Master joint control | joint_positions | (6,) | Master-arm joint angles (in radians), ordered from the base to the end effector. |
| Joint velocity | joint_vel | (7,) | Velocities of the 6 joints and the gripper. |
| Puppet joint control | qpos | (6,) | Puppet-arm joint angles (in radians), ordered from the base to the end effector. |
| Puppet pose control | ee_pose_quaternion | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is the quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Puppet pose control | ee_pose_rpy | (6,) | End-effector pose (tx, ty, tz, rr, rp, ry), where (tx, ty, tz) is the position relative to the arm base frame and (rr, rp, ry) are Euler angles (in radians). X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper | (1,) | Actual gripper width measurement in meters. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
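Because ALOHA stores the puppet pose both as a quaternion and as Euler angles, the two representations can be cross-checked against each other. A hedged sketch using SciPy, assuming the quaternion is in (x, y, z, w) order (SciPy's convention) and the Euler angles follow the xyz sequence; neither convention is documented above:

```python
import json
import numpy as np
from scipy.spatial.transform import Rotation

with open("states/left_states.jsonl") as f:  # illustrative path
    frame = json.loads(f.readline())         # first frame of the left arm

quat = frame["ee_pose_quaternion"]  # (tx, ty, tz, rx, ry, rz, rw)
rpy  = frame["ee_pose_rpy"]         # (tx, ty, tz, rr, rp, ry)

# Convert the quaternion part to Euler angles and compare with the stored RPY.
# Quaternion order and Euler convention here are assumptions, not documented above.
euler = Rotation.from_quat(quat[3:]).as_euler("xyz")  # radians
print(np.allclose(euler, rpy[3:], atol=1e-3))
```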
Convert to LeRobot
While you can implement a custom Dataset class to read RoboChallenge data directly, we strongly recommend converting to LeRobot format to take advantage of LeRobot's comprehensive data processing and loading utilities.
The script convert_to_lerobot.py converts ARX5 data to a LeRobot dataset as an example. For the other embodiments (UR5, Franka, ALOHA), you can adapt the script accordingly.
Prerequisites
- Python 3.9+ with the following packages: `lerobot==0.1.0`, `opencv-python`, `numpy`
- Configure `$LEROBOT_HOME` (defaults to `~/.lerobot` if unset).

```bash
pip install lerobot==0.1.0 opencv-python numpy
export LEROBOT_HOME="/path/to/lerobot_home"
```
Usage
Run the converter from the repository root (or provide an absolute path):
```bash
python convert_to_lerobot.py \
    --repo-name example_repo \
    --raw-dataset /path/to/example_dataset \
    --frame-interval 1
```
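Here `--frame-interval` presumably controls temporal subsampling (1 keeps every frame, 2 every other frame, and so on). A minimal OpenCV sketch of that kind of subsampling, as an illustration rather than the script's actual internals:

```python
import cv2

interval = 2  # keep every 2nd frame; --frame-interval 1 would keep all frames
cap = cv2.VideoCapture("videos/global_realsense_rgb.mp4")  # illustrative path

kept, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % interval == 0:
        kept.append(frame)  # BGR uint8 array, shape (H, W, 3)
    idx += 1
cap.release()
print(f"kept {len(kept)} of {idx} frames")
```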
Output
- Frames and metadata are saved to `$LEROBOT_HOME/<repo-name>`.
- At the end, the script calls `dataset.consolidate(run_compute_stats=False)`. If you require aggregated statistics, run it with `run_compute_stats=True` or execute a separate stats job.