Update README.md
README.md
CHANGED
@@ -28,7 +28,7 @@ tags:
 - **Model Developers:** Neural Magic
 
 Quantized version of [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
-It achieves an average score of
+It achieves an average score of 70.70 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 70.92.
 
 ### Model Optimizations
 
@@ -107,11 +107,11 @@ lm_eval \
 <tr>
  <td>ARC Challenge (25-shot)
  </td>
- <td>
+ <td>59.39
  </td>
- <td>
+ <td>59.13
  </td>
- <td>99.
+ <td>99.6%
  </td>
 </tr>
 <tr>