nielsr HF Staff committed on
Commit e78ce88 · verified · 1 Parent(s): 26a1b95

Improve dataset card for OceanGym: add paper abstract, code link, and update metadata & citation


This PR significantly enhances the dataset card for OceanGym by:

- Adding `task_categories: ['robotics']` to the metadata for better discoverability.
- Enriching the `tags` with `['robotics', 'benchmark', 'environment', 'underwater', 'multi-modal', 'mllm', 'large-language-models']` to provide more descriptive context.
- Updating the paper link to the official Hugging Face Papers page (`https://huggingface.co/papers/2509.26536`) and changing its label to "Paper".
- Adding a direct link to the GitHub repository (`https://github.com/OceanGPT/OceanGym`).
- Cleaning up the existing Google Drive link by removing the `?usp=sharing` suffix.
- Adding the Baidu Drive link from the GitHub README.
- Including the comprehensive paper abstract in the dataset card for a clearer overview.
- Updating the reference to the paper within the "Decision Task" section to point to the new Hugging Face paper link.
- Adding a new "Datasets" section, mirroring the GitHub README, to provide specific links and structure for accessing the evaluation data. The Table of Contents is also updated.
- Replacing the placeholder BibTeX citation with the complete and correct entry from the GitHub README.

These updates aim to provide a more complete, accurate, and user-friendly dataset card for `OceanGym`.
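
For readers who want to verify the updated card end to end, the dataset repo it documents can be pulled straight from the Hub. This is a minimal sketch using `huggingface_hub`; the `local_dir` value is an illustrative choice, not something the card prescribes:

```python
# Download a local copy of the zjunlp/OceanGym dataset repo from the Hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="zjunlp/OceanGym",  # repo id linked from the dataset card
    repo_type="dataset",        # dataset repo, not a model repo
    local_dir="OceanGym",       # illustrative destination folder (assumption)
)
print(f"Dataset files downloaded to: {local_path}")
```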

Files changed (1)

README.md +76 -8
README.md CHANGED
@@ -1,9 +1,18 @@
---
- license: mit
language:
- en
+ license: mit
+ task_categories:
+ - robotics
tags:
- agent
+ - robotics
+ - benchmark
+ - environment
+ - underwater
+ - multi-modal
+ - mllm
+ - large-language-models
---

<h1 align="center"> 🌊 OceanGym 🦾 </h1>
@@ -11,15 +20,19 @@ tags:

<p align="center">
🌐 <a href="https://oceangpt.github.io/OceanGym" target="_blank">Home Page</a>
- 📄 <a href="https://arxiv.org/abs/123" target="_blank">ArXiv Paper</a>
+ 📄 <a href="https://huggingface.co/papers/2509.26536" target="_blank">Paper</a>
+ 💻 <a href="https://github.com/OceanGPT/OceanGym" target="_blank">Code</a>
🤗 <a href="https://huggingface.co/datasets/zjunlp/OceanGym" target="_blank">Hugging Face</a>
- ☁️ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih?usp=sharing" target="_blank">Google Drive</a>
+ ☁️ <a href="https://drive.google.com/drive/folders/1H7FTbtOCKTIEGp3R5RNsWvmxZ1oZxQih" target="_blank">Google Drive</a>
+ ☁️ <a href="https://pan.baidu.com/s/19c-BeIpAG1EjMjXZHCAqPA?pwd=sgjs" target="_blank">Baidu Drive</a>
</p>

<img src="asset/img/o1.png" align=center>

**OceanGym** is a high-fidelity embodied underwater environment that simulates a realistic ocean setting with diverse scenes. As illustrated in figure, OceanGym establishes a robust benchmark for evaluating autonomous agents through a series of challenging tasks, encompassing various perception analyses and decision-making navigation. The platform facilitates these evaluations by supporting multi-modal perception and providing action spaces for continuous control.

+ We introduce OceanGym, the first comprehensive benchmark for ocean underwater embodied agents, designed to advance AI in one of the most demanding real-world environments. Unlike terrestrial or aerial domains, underwater settings present extreme perceptual and decision-making challenges, including low visibility, dynamic ocean currents, making effective agent deployment exceptionally difficult. OceanGym encompasses eight realistic task domains and a unified agent framework driven by Multi-modal Large Language Models (MLLMs), which integrates perception, memory, and sequential decision-making. Agents are required to comprehend optical and sonar data, autonomously explore complex environments, and accomplish long-horizon objectives under these harsh conditions. Extensive experiments reveal substantial gaps between state-of-the-art MLLM-driven agents and human experts, highlighting the persistent difficulty of perception, planning, and adaptability in ocean underwater environments. By providing a high-fidelity, rigorously designed platform, OceanGym establishes a testbed for developing robust embodied AI and transferring these capabilities to real-world autonomous ocean underwater vehicles, marking a decisive step toward intelligent agents capable of operating in one of Earth's last unexplored frontiers. The code and data are available at this https URL.
+
# 💝 Acknowledgement

OceanGym environment is based on Unreal Engine (UE) 5.3.
@@ -63,6 +76,7 @@ Thanks for their great contributions!
- [⏱️ Results](#️-results)
- [Decision Task](#decision-task-1)
- [Perception Task](#perception-task-1)
+ - [📚 Datasets](#-datasets)
- [🚩 Citation](#-citation)

# 📺 Quick Start
@@ -292,11 +306,11 @@ C:\Users\Windows\AppData\Local\holoocean\2.0.0\worlds\Ocean

> All commands are applicable to **Windows** only, because it requires full support from the `UE5 Engine`.

- The decision experiment can be run with reference to the [Quick Start](#️-quick-start).
+ The decision experiment can be run with reference to the [Quick Start](#-quick-start).

## Target Object Locations

- We have provided eight tasks. For specific task descriptions, please refer to the paper.
+ We have provided eight tasks. For specific task descriptions, please refer to the [paper](https://huggingface.co/papers/2509.26536).

The following are the coordinates for each target object in the environment (in meters):

@@ -591,6 +605,8 @@ python perception/task/init_map_with_sonar.py \

# ⏱️ Results

+ **We provide the trajectory data of OceanGym’s various task evaluations at the [next section](#-datasets), enabling readers to analyze and reproduce the results.**
+
## Decision Task

<img src="asset/img/t1.png" align=center>
@@ -605,14 +621,66 @@ python perception/task/init_map_with_sonar.py \
- Values represent accuracy percentages.
- Adding sonar means using both RGB and sonar images.

+ # 📚 Datasets
+ **The link to the dataset is as follows**\
+ ☁️ <a href="https://drive.google.com/drive/folders/1VhrvhvbWvnaS4EyeyaV1fmTQ6gPo8GCN?usp=drive_link" target="_blank">Google Drive</a>
+ - Decision Task
+
+ ```python
+ decision_dataset
+ ├── main
+ │   ├── gpt4omini
+ │   │   ├── task1
+ │   │   │   ├── point1
+ │   │   │   │   ├── llm_output_...log
+ │   │   │   │   ├── memory_...json
+ │   │   │   │   └── important_memory_...json
+ │   │   │   └── ... (other data points like point2, point3...)
+ │   │   └── ... (other tasks like task2, task3...)
+ │   ├── gemini
+ │   │   └── ... (structure is the same as gpt4omini)
+ │   └── qwen
+ │       └── ... (structure is the same as gpt4omini)
+ │
+ ├── migration
+ │   ├── gpt4o
+ │   │   └── ... (structure is the same as above)
+ │   └── qwen
+ │       └── ... (structure is the same as above)
+ │
+ └── scale
+     ├── qwen
+     └── gpt4omini
+ ```
+
+
+ - Perception Task
+
+ ```python
+ perception_dataset
+ ├── data
+ │   ├── highLight
+ │   ├── highLightContext
+ │   ├── lowLight
+ │   ├── lowLightContext
+ │
+ └── result
+
+ ```
+
# 🚩 Citation

If this OceanGym paper or benchmark is helpful, please kindly cite as this:

```bibtex
- @inproceedings{xxx,
- title={OceanGym: A Benchmark Environment for Underwater Embodied Agents},
- ...
+ @misc{xue2025oceangymbenchmarkenvironmentunderwater,
+ title={OceanGym: A Benchmark Environment for Underwater Embodied Agents},
+ author={Yida Xue and Mingjun Mao and Xiangyuan Ru and Yuqi Zhu and Baochang Ren and Shuofei Qiao and Mengru Wang and Shumin Deng and Xinyu An and Ningyu Zhang and Ying Chen and Huajun Chen},
+ year={2025},
+ eprint={2509.26536},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2509.26536},
}
```
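
For readers who download the trajectory archive linked in the new "Datasets" section, here is a short sketch of how the `decision_dataset` tree added above could be indexed. The folder names follow the tree in the diff; the glob patterns are assumptions, since the README elides the full file names (`llm_output_...log`, `memory_...json`):

```python
# Sketch: walk the decision-task trajectory tree from the "Datasets" section.
# Split/model/task/point folder names come from the tree in the diff; the
# glob patterns below are assumptions matching the elided file names.
from pathlib import Path

def index_trajectories(root: str = "decision_dataset"):
    """Yield (split, model, task, point, memory_files) for each trajectory."""
    for split_dir in sorted(Path(root).iterdir()):       # main / migration / scale
        if not split_dir.is_dir():
            continue
        for model_dir in sorted(split_dir.iterdir()):    # gpt4omini / gemini / qwen ...
            for task_dir in sorted(model_dir.glob("task*")):
                for point_dir in sorted(task_dir.glob("point*")):
                    memories = sorted(point_dir.glob("memory_*.json"))
                    yield (split_dir.name, model_dir.name,
                           task_dir.name, point_dir.name, memories)

if __name__ == "__main__":
    for split, model, task, point, memories in index_trajectories():
        print(split, model, task, point, len(memories), "memory files")
```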