Update README.md
README.md
CHANGED
@@ -59,6 +59,15 @@ pipeline_tag: text-generation
 **Overview**
 - We conducted a performance evaluation following the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We evaluated our model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`, using the lm-evaluation-harness repository at commit `b281b0921b636bc36ad05c0b0b0763bd6dd43463`. The evaluation environment can be reproduced with the commands below:
 
+**Main Results**
+| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
+|-----------------------------------------------|---------|-------|-----------|-------|------------|
+| llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
+| llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
+| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
+| llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 63.2 | 56.7 | 84.0 | 59.0 | 53.1 |
+| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
+
 **Scripts**
 - Prepare evaluation environments:
 ```
@@ -72,15 +81,6 @@ git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
 cd lm-evaluation-harness
 ```
 
-**Main Results**
-| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
-|-----------------------------------------------|---------|-------|-----------|-------|------------|
-| llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
-| llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
-| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
-| llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 63.2 | 56.7 | 84.0 | 59.0 | 53.1 |
-| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
-
 ## Ethical Issues
 
 **Ethical Considerations**
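
For reference, below is a minimal sketch of how the four tasks might be invoked once the environment above is prepared. The `main.py` entry point, the `hf-causal-experimental` adapter, and the task names (`arc_challenge`, `hellaswag`, `hendrycksTest-*` for the MMLU subtasks, `truthfulqa_mc`) reflect lm-evaluation-harness as it stood around that commit, and the few-shot counts (25/10/5/0) follow the Open LLM Leaderboard settings; the model id and output paths are placeholders rather than values taken from this commit, so verify the flags against the checked-out revision.

```bash
#!/usr/bin/env bash
# Sketch: run the four leaderboard tasks from inside lm-evaluation-harness.
# Assumptions are noted in the text above; MODEL is a placeholder, not a value
# taken from this commit.
set -euo pipefail

MODEL="upstage/llama-65b-instruct"   # placeholder model id; substitute your own
mkdir -p results

run_task() {  # usage: run_task <task> <num_fewshot> <output_name>
  python main.py \
    --model hf-causal-experimental \
    --model_args "pretrained=${MODEL},use_accelerate=True" \
    --tasks "$1" \
    --num_fewshot "$2" \
    --batch_size 1 \
    --output_path "results/$3.json"
}

run_task arc_challenge     25 arc_challenge   # ARC-Challenge, 25-shot
run_task hellaswag         10 hellaswag       # HellaSwag, 10-shot
run_task 'hendrycksTest-*'  5 mmlu            # MMLU subtasks, 5-shot
run_task truthfulqa_mc      0 truthfulqa      # TruthfulQA (MC), 0-shot
```

Each task is run separately because the leaderboard assigns a different few-shot count per task; the per-task JSON outputs can then be averaged to reproduce the `Average` column in the table above.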