nielsr (HF Staff) committed on
Commit 4ba916d · verified · 1 Parent(s): 0bacefa

Improve dataset card with metadata and description

This PR improves the dataset card by adding essential metadata, a more detailed description, and a clearer structure. It draws on the GitHub README to enhance the description. The license is assumed to be MIT; please verify and correct it if necessary.

Files changed (1):
  1. README.md +45 -90
README.md CHANGED
@@ -1,91 +1,46 @@
-
- # FedMABench: Benchmarking Mobile Agents on Decentralized Heterogeneous User Data
-
-
-
- ## Overview
- **FedMABench** is an open-source benchmark for federated training and evaluation of *mobile agents*, specifically designed for heterogeneous scenarios.
-
- FedMABench includes the following key features:
- - 6 **datasets** with 30+ subsets (over 800 apps across 5 categories)
- - 3 types of **heterogeneity** (e.g., *App Category Distribution*, *Specific App Preference*, etc.)
- - 8 **federated learning** algorithms (e.g., *FedAvg*, *FedProx*, *SCAFFOLD*, *FedAvgM*, etc.)
- - 10+ **base models** covering *Qwen2-VL-2B/7B-Instruct*, *InternVL2-1B/2B/4B/8B*, *DeepseekVL2* and more.
-
- ![intro](FedMABench_Overview.pdf)
-
- ## Setup
-
- Clone the repo with its submodules and install the required packages.
-
- ```
- git clone --recursive --shallow-submodules https://github.com/wwh0411/FedMABench.git
- cd FedMABench
- conda create -n fedma python=3.10
- conda activate fedma
- pip install -r requirements.txt
- ```
-
- ## Data Directory Tree
- The six datasets of FedMABench are organized in separate directories.
- We also provide test sets and temporary split files for reference.
-
- ```
- .
- ├── App-Level
- │   ├── App Half-Skew.jsonl
- │   ├── App IID.jsonl
- │   ├── App Non-Uniform.jsonl
- │   ├── App Skew.jsonl
- │   ├── App-Level Val.jsonl
- │   ├── train_split.json
- │   └── val_split.json
- ├── Basic-AC
- │   ├── Basic-AC Entertainment.jsonl
- │   ├── Basic-AC Lives.jsonl
- │   ├── Basic-AC Office.jsonl
- │   ├── Basic-AC Shopping.jsonl
- │   ├── Basic-AC Traveling.jsonl
- │   ├── Basic-AC c10n1000.jsonl
- │   ├── Basic-AC c10n200.jsonl
- │   ├── Basic-AC c10n3000.jsonl
- │   ├── Basic-AC c10n500.jsonl
- │   ├── Basic-AC c10n5000.jsonl
- │   ├── Basic-AC c10n7000.jsonl
- │   ├── Basic-AC c30n3000.jsonl
- │   ├── Basic-AC c50n5000.jsonl
- │   ├── Basic-AC c70n7000.jsonl
- │   ├── Val_100.jsonl
- │   └── val_100.json
- ├── Basic-AitW
- │   ├── Basic-AitW G-Apps.jsonl
- │   ├── Basic-AitW General.jsonl
- │   ├── Basic-AitW Install.jsonl
- │   ├── Basic-AitW Single.jsonl
- │   └── Basic-AitW WebShopping.jsonl
- ├── Category-Level
- │   ├── App Random.jsonl
- │   ├── App Skew.jsonl
- │   ├── Category Half-Skew.jsonl
- │   ├── Category IID.jsonl
- │   ├── Category Non-Unifrom.jsonl
- │   ├── Category Skew.jsonl
- │   ├── Category Val.jsonl
- │   ├── train_split.json
- │   └── val_split.json
- ├── ScaleApp
- │   ├── ScaleApp IID.jsonl
- │   ├── ScaleApp Random.jsonl
- │   ├── ScaleApp Skew.jsonl
- │   ├── ScaleApp Val_250.jsonl
- │   ├── train_split.json
- │   └── val_split.json
- ├── Step-Episode
- │   ├── Both Skew.jsonl
- │   ├── Episode Skew.jsonl
- │   ├── Step Skew.jsonl
- │   ├── Step-Episode IID.jsonl
- │   ├── Step-Episode Val.jsonl
- │   └── val_split.json
- └── README.md
- ```
 
+ ---
+ datasets:
+ - HuggingFaceH4/ultrafeedback_binarized
+ language:
+ - en
+ library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
+ ---
+
+ # Llama-3-Base-8B-DICE-Iter2
+
+ This model was developed using [Bootstrapping Language Models with DPO Implicit Rewards](https://arxiv.org/abs/2406.09760) (DICE) at iteration 2, starting from [princeton-nlp/Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO).
+
+ ## Links to Other Models
+ - [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1)
+ - [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2)
+
+ ## Model Description
+
+ - Model type: an 8B-parameter GPT-like model fine-tuned on synthetic datasets.
+ - Language(s) (NLP): primarily English
+ - License: MIT
+ - Fine-tuned from model: princeton-nlp/Llama-3-Base-8B-SFT-DPO
+
+ ## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)
+
+ | Model | LC Win Rate | Win Rate |
+ |-------|:-----------:|:--------:|
+ | [Llama-3-Base-8B-SFT-DPO](https://huggingface.co/princeton-nlp/Llama-3-Base-8B-SFT-DPO) | 18.20 | 15.50 |
+ | [Llama-3-Base-8B-DICE-Iter1](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter1) | 25.08 | 25.77 |
+ | [Llama-3-Base-8B-DICE-Iter2](https://huggingface.co/sail/Llama-3-Base-8B-DICE-Iter2) | **27.55** | **30.99** |
+
+ ## Code
+ https://github.com/sail-sg/dice
+
+ ## Citation
+
+ ```bibtex
+ @article{chen2024bootstrapping,
+   title={Bootstrapping Language Models with DPO Implicit Rewards},
+   author={Chen, Changyu and Liu, Zichen and Du, Chao and Pang, Tianyu and Liu, Qian and Sinha, Arunesh and Varakantham, Pradeep and Lin, Min},
+   journal={arXiv preprint arXiv:2406.09760},
+   year={2024}
+ }
+ ```
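
The DICE paper cited in the new card bootstraps preference data from DPO's *implicit reward*, which is the scaled log-likelihood ratio between the fine-tuned policy and its reference model. As a minimal sketch of that quantity (a hypothetical helper, not code from the sail-sg/dice repository; the log-probabilities here are made-up illustrative values):

```python
# Sketch of DPO's implicit reward, which DICE uses to score responses:
# r(x, y) = beta * (log pi(y|x) - log pi_ref(y|x)).
# `beta` is the DPO temperature; 0.1 is a common choice, not a value
# taken from the DICE paper.
def implicit_reward(logp_policy: float, logp_ref: float, beta: float = 0.1) -> float:
    """Implicit reward of a response given policy and reference log-probs."""
    return beta * (logp_policy - logp_ref)

# A response the fine-tuned policy likes better than the reference does
# gets a positive reward; a disfavored response gets a negative one.
preferred = implicit_reward(logp_policy=-12.0, logp_ref=-15.0)  # positive
rejected = implicit_reward(logp_policy=-20.0, logp_ref=-18.0)   # negative
```

In DICE, rewards of this form are used to rank a model's own sampled responses into chosen/rejected pairs for the next DPO iteration, which is what "iteration 2" in the model name refers to.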