Improve dataset card for OceanGym: add paper abstract, code link, and update metadata & citation
#1 opened by nielsr (HF Staff)

README.md CHANGED
---
language:
- en
license: mit
task_categories:
- robotics
tags:
- agent
- robotics
- benchmark
- environment
- underwater
- multi-modal
- mllm
- large-language-models
---

<h1 align="center"> π OceanGym π¦Ύ </h1>

<p align="center">
π <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
π <a href="https://huggingface.co/papers/2509.26536" target="_blank">Paper</a>
π» <a href="https://github.com/OceanGPT/OceanGym" target="_blank">Code</a>
π€ <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
βοΈ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih" target="_blank">Google Drive</a>
βοΈ <a href="https://pan.baidu.com/s/19c-BeIpAG1EjMjXZHCAqPA?pwd=sgjs" target="_blank">Baidu Drive</a>
</p>

<img src="asset/img/o1.png" align=center>

**OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in the figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks, encompassing various perception analyses and decision-making navigation. The platform facilitates these evaluations by supporting multi-modal perception and providing action spaces for continuous control.

We introduce OceanGym, the first comprehensive benchmark for ocean underwater embodied agents, designed to advance AI in one of the most demanding real-world environments. Unlike terrestrial or aerial domains, underwater settings present extreme perceptual and decision-making challenges, including low visibility and dynamic ocean currents, which make effective agent deployment exceptionally difficult. OceanGym encompasses eight realistic task domains and a unified agent framework driven by Multi-modal Large Language Models (MLLMs), which integrates perception, memory, and sequential decision-making. Agents are required to comprehend optical and sonar data, autonomously explore complex environments, and accomplish long-horizon objectives under these harsh conditions. Extensive experiments reveal substantial gaps between state-of-the-art MLLM-driven agents and human experts, highlighting the persistent difficulty of perception, planning, and adaptability in ocean underwater environments. By providing a high-fidelity, rigorously designed platform, OceanGym establishes a testbed for developing robust embodied AI and transferring these capabilities to real-world autonomous ocean underwater vehicles, marking a decisive step toward intelligent agents capable of operating in one of Earth's last unexplored frontiers. The code and data are available at https://github.com/OceanGPT/OceanGym.

# π Acknowledgement

The OceanGym environment is based on Unreal Engine (UE) 5.3.
- [β±οΈ Results](#οΈ-results)
- [Decision Task](#decision-task-1)
- [Perception Task](#perception-task-1)
- [π Datasets](#-datasets)
- [π© Citation](#-citation)

# πΊ Quick Start
> All commands are applicable to **Windows** only, because they require full support from the `UE5` engine.

The decision experiment can be run with reference to the [Quick Start](#-quick-start).

## Target Object Locations

We have provided eight tasks. For specific task descriptions, please refer to the [paper](https://huggingface.co/papers/2509.26536).

The following are the coordinates for each target object in the environment (in meters):
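As a hypothetical illustration of how such coordinates might be used (the names and values below are invented for the sketch, not the actual task coordinates), an agent can check whether it has reached a target by Euclidean distance:

```python
import math

# Hypothetical target coordinates in meters; refer to the coordinate table
# in the README/paper for the real per-task positions.
TARGETS = {
    "task1_target": (120.0, -45.0, -18.5),
    "task2_target": (-60.0, 210.0, -32.0),
}

def reached(rov_pos, target, threshold_m=5.0):
    """Return True if the ROV is within `threshold_m` meters of the target."""
    return math.dist(rov_pos, target) <= threshold_m
```

The 5-meter threshold is an arbitrary choice for the sketch; any success radius would be task-specific.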
# β±οΈ Results

**We provide the trajectory data from OceanGym's task evaluations in the [Datasets](#-datasets) section, enabling readers to analyze and reproduce the results.**

## Decision Task

<img src="asset/img/t1.png" align=center>

- Values represent accuracy percentages.
- "Adding sonar" means using both RGB and sonar images.
# π Datasets

**The dataset is available at the following link:**\
βοΈ <a href="https://drive.google.com/drive/folders/1VhrvhvbWvnaS4EyeyaV1fmTQ6gPo8GCN?usp=drive_link" target="_blank">Google Drive</a>

- Decision Task

```text
decision_dataset
├── main
│   ├── gpt4omini
│   │   ├── task1
│   │   │   ├── point1
│   │   │   │   ├── llm_output_...log
│   │   │   │   ├── memory_...json
│   │   │   │   └── important_memory_...json
│   │   │   └── ... (other data points like point2, point3...)
│   │   └── ... (other tasks like task2, task3...)
│   ├── gemini
│   │   └── ... (structure is the same as gpt4omini)
│   └── qwen
│       └── ... (structure is the same as gpt4omini)
│
├── migration
│   ├── gpt4o
│   │   └── ... (structure is the same as above)
│   └── qwen
│       └── ... (structure is the same as above)
│
└── scale
    ├── qwen
    └── gpt4omini
```

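The decision trajectories can be enumerated programmatically. The sketch below is illustrative only: it assumes the downloaded root folder is named `decision_dataset` and that files follow the `llm_output_*.log` / `memory_*.json` / `important_memory_*.json` patterns shown in the tree; the exact names in the release may differ.

```python
from pathlib import Path

def collect_trajectories(root: str):
    """Index each run's files by (split, model, task, point).

    Assumes the layout sketched above:
    <root>/<split>/<model>/<task>/<point>/{logs, memory JSON}.
    """
    runs = {}
    for point_dir in sorted(Path(root).glob("*/*/*/*")):
        if not point_dir.is_dir():
            continue
        split, model, task, point = point_dir.parts[-4:]
        runs[(split, model, task, point)] = {
            "logs": sorted(p.name for p in point_dir.glob("llm_output_*.log")),
            "memory": sorted(p.name for p in point_dir.glob("memory_*.json")),
            "important": sorted(p.name for p in point_dir.glob("important_memory_*.json")),
        }
    return runs
```

Each memory file can then be loaded with `json.loads(path.read_text())` for trajectory analysis.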
- Perception Task

```text
perception_dataset
├── data
│   ├── highLight
│   ├── highLightContext
│   ├── lowLight
│   └── lowLightContext
│
└── result
```

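A minimal sketch for taking stock of the perception data, assuming the four lighting-condition folders shown above hold the sample files (their formats are not specified here):

```python
from pathlib import Path

# Folder names as shown in the perception_dataset tree.
CONDITIONS = ["highLight", "highLightContext", "lowLight", "lowLightContext"]

def count_samples(root: str) -> dict:
    """Count files under each lighting-condition folder of <root>/data."""
    data_dir = Path(root) / "data"
    return {
        cond: sum(1 for p in (data_dir / cond).rglob("*") if p.is_file())
        for cond in CONDITIONS
        if (data_dir / cond).is_dir()
    }
```

Missing condition folders are simply skipped, so the same helper works on partial downloads.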
# π© Citation

If the OceanGym paper or benchmark is helpful, please kindly cite it as follows:

```bibtex
@misc{xue2025oceangymbenchmarkenvironmentunderwater,
  title={OceanGym: A Benchmark Environment for Underwater Embodied Agents},
  author={Yida Xue and Mingjun Mao and Xiangyuan Ru and Yuqi Zhu and Baochang Ren and Shuofei Qiao and Mengru Wang and Shumin Deng and Xinyu An and Ningyu Zhang and Ying Chen and Huajun Chen},
  year={2025},
  eprint={2509.26536},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.26536},
}
```