V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception
Lei Yang · Xinyu Zhang · Jun Li · Chen Wang · Jiaqi Ma · Zhiying Song · Tong Zhao · Ziying Song · Li Wang · Mo Zhou · Yang Shen · Kai Wu · Chen Lv
This is the official implementation of "V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception" (NeurIPS 2025 Spotlight).
Supported by the THU OpenMDP Lab.
📘 Dataset Summary
V2X-Radar is a large-scale cooperative perception dataset collected from complex urban intersections in mainland China. It is the first public dataset that integrates 4D imaging radar, LiDAR, and multi-view cameras across vehicle-to-everything (V2X) configurations. The dataset aims to advance multi-sensor fusion, cooperative 3D detection, and adverse-weather perception research in autonomous driving.
🧩 Supported Tasks
- 3D Object Detection (Radar/LiDAR/Camera/V2X Fusion)
- Cooperative Perception (V2V / V2I / V2X)
- Temporal Misalignment & Communication Delay Benchmarking (see the pairing sketch after this list)
- Domain Adaptation and Sensor-Robust Learning
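For the delay benchmark, a common setup pairs each ego-vehicle frame with a roadside frame captured several steps earlier. Below is a minimal pairing sketch; it assumes the sequential five-digit frame naming of the V2X-Radar-C split (see the Dataset Structure section below), and the 10 Hz rate and 200 ms delay are illustrative assumptions, not dataset specifications.

```python
# Minimal sketch: pair each vehicle frame with a roadside frame captured
# `delay_steps` earlier to emulate communication delay. The 10 Hz rate and
# 200 ms delay are illustrative assumptions, not dataset specifications.
FRAME_RATE_HZ = 10
DELAY_MS = 200
delay_steps = round(DELAY_MS * FRAME_RATE_HZ / 1000)  # 2 frames at 10 Hz

num_frames = 251  # frames 00000 .. 00250 in one V2X-Radar-C sequence
pairs = [
    (f"{t:05d}", f"{t - delay_steps:05d}")  # (vehicle frame, stale roadside frame)
    for t in range(delay_steps, num_frames)
]
print(pairs[:2])  # [('00002', '00000'), ('00003', '00001')]
```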
🗣️ Languages
All metadata and annotations are provided in English.
File paths and geographic identifiers are anonymized to comply with Chinese data export regulations.
📊 Dataset Structure
V2X-Radar
│ ├── V2X-Radar-I # KITTI Format
│ │ ├── training
│ │ │ ├── velodyne
│ │ │ ├── radar
│ │ │ ├── calib
│ │ │ ├── image_1
│ │ │ ├── image_2
│ │ │ ├── image_3
│ │ │ ├── label_2
│ │ ├── ImageSets
│ │ │ ├── train.txt
│ │ │ ├── trainval.txt
│ │ │ ├── val.txt
│ │ │ ├── test.txt
│ ├── V2X-Radar-V # KITTI Format
│ │ ├── training
│ │ │ ├── velodyne
│ │ │ ├── radar
│ │ │ ├── calib
│ │ │ ├── image_2
│ │ │ ├── label_2
│ │ ├── ImageSets
│ │ │ ├── train.txt
│ │ │ ├── trainval.txt
│ │ │ ├── val.txt
│ │ │ ├── test.txt
│ ├── V2X-Radar-C # OpenV2V Format
│ │ ├── train
│ │ │ ├── 2024-05-15-16-28-09
│ │ │ │ ├── -1 # Roadside unit
│ │ │ │ │ ├── 00000.pcd - 00250.pcd # LiDAR point clouds from timestamp 0 to 250
│ │ │ │ │ ├── 00000_radar.pcd - 00250_radar.pcd # 4D Radar point clouds from timestamp 0 to 250
│ │ │ │ │ ├── 00000.yaml - 00250.yaml # metadata for each timestamp
│ │ │ │ │ ├── 00000_camera0.jpg - 00250_camera0.jpg # left camera images
│ │ │ │ │ ├── 00000_camera1.jpg - 00250_camera1.jpg # front camera images
│ │ │ │ │ ├── 00000_camera2.jpg - 00250_camera2.jpg # right camera images
│ │ │ │ ├── 142 # Vehicle side
│ │ ├── validate
│ │ ├── test
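The V2X-Radar-C split follows the OpenV2V layout above, so one frame of one agent can be loaded directly from the per-timestamp files. Below is a minimal loading sketch assuming Open3D and PyYAML; `load_frame` and the example paths are illustrative rather than an official API. Note that Open3D keeps only the xyz channels of a .pcd file, so the extra radar attributes (doppler, intensity) would need a full PCD parser, and the yaml schema should be checked against the released files.

```python
# Minimal sketch: load LiDAR, 4D radar, and metadata for one timestamp of
# one agent in the V2X-Radar-C (OpenV2V-format) split. `load_frame` and the
# example paths are illustrative, not part of an official API.
from pathlib import Path

import numpy as np
import open3d as o3d
import yaml


def load_frame(agent_dir: Path, frame_id: str):
    # Open3D returns only xyz; the doppler/intensity channels of the radar
    # .pcd require a dedicated PCD parser (e.g. pypcd) to recover.
    lidar = np.asarray(o3d.io.read_point_cloud(str(agent_dir / f"{frame_id}.pcd")).points)
    radar = np.asarray(o3d.io.read_point_cloud(str(agent_dir / f"{frame_id}_radar.pcd")).points)
    with open(agent_dir / f"{frame_id}.yaml") as f:
        meta = yaml.safe_load(f)  # per-frame poses, calibration, annotations
    return lidar, radar, meta


scene = Path("V2X-Radar/V2X-Radar-C/train/2024-05-15-16-28-09")
lidar, radar, meta = load_frame(scene / "-1", "00000")  # "-1" is the roadside agent
print(lidar.shape, radar.shape, sorted(meta))
```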
⚙️ Data Fields
| Field | Type | Description |
|---|---|---|
| radar_points | array(float) | 4D Radar point clouds (x, y, z, doppler, intensity) |
| lidar_points | array(float) | LiDAR point clouds |
| images | list(image) | Multi-view RGB frames |
| calibration | dict | Intrinsics and extrinsics |
| timestamp | float | Absolute timestamp (ms) |
| annotations | dict | 3D bounding boxes, categories, and track IDs |
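As a quick sanity check, the radar_points field can be unpacked by column. The sketch below assumes an (N, 5) float array in the (x, y, z, doppler, intensity) order listed above; the placeholder data and the 0.5 m/s Doppler threshold are arbitrary illustrative values.

```python
# Minimal sketch: unpack a radar_points array assumed to be (N, 5) floats
# ordered (x, y, z, doppler, intensity). The placeholder data and the
# Doppler threshold are illustrative only.
import numpy as np

radar_points = np.random.randn(1000, 5).astype(np.float32)  # placeholder frame

xyz = radar_points[:, :3]        # point positions in the sensor frame
doppler = radar_points[:, 3]     # radial (Doppler) velocity channel
intensity = radar_points[:, 4]   # return intensity

# First-pass moving-object candidates: points with non-negligible Doppler.
moving = radar_points[np.abs(doppler) > 0.5]
print(f"{len(moving)}/{len(radar_points)} points exceed the Doppler threshold")
```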
🧭 Data Collection and Geographic Coverage
Data were recorded in Chinese metropolitan cities using research-licensed vehicles and roadside units.
All raw sensor data underwent manual anonymization and privacy filtering; no personally identifiable information, license plates, or faces remain.
⚖️ Licensing Information
This dataset is released under the CC BY-NC-ND 4.0 license.
- Attribution — Users must credit “V2X-Radar Dataset, 2025”.
- Non-Commercial — Use for research and education only.
- No Derivatives — Do not redistribute modified versions.
Full license text: https://creativecommons.org/licenses/by-nc-nd/4.0/
🪪 Citation
@article{yang2024v2x,
title={V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception},
author={Yang, Lei and Zhang, Xinyu and Li, Jun and Wang, Chen and Ma, Jiaqi and Song, Zhiying and Zhao, Tong and Song, Ziying and Wang, Li and Zhou, Mo and Shen, Yang and Lv, Chen},
journal={Advances in Neural Information Processing Systems (NeurIPS)},
year={2025}
}