Update README.md

README.md

@@ -47,7 +47,7 @@ This document presents the evaluation results of `DeepSeek-LLM-67B-Chat`, a **8-
 
 ## ⚙️ Model Configuration
 
-- **Model:** `DeepSeek-
+- **Model:** `DeepSeek-LLM-67B-Chat`
 - **Parameters:** `67 billion`
 - **Quantization:** `8-bit GPTQ`
 - **Source:** Hugging Face (`hf`)
@@ -67,7 +67,6 @@ This document presents the evaluation results of `DeepSeek-LLM-67B-Chat`, a **8-
 
 ## 📈 Performance Insights
 
-- The `"higher_is_better"` flag confirms that **higher accuracy is preferred**.
 - **Quantization Impact:** The **8-bit GPTQ quantization** reduces memory usage but may also impact accuracy slightly.
 - **Zero-shot Limitation:** Performance could improve with **few-shot prompting** (providing examples before testing).
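The quantization-impact point can be made concrete with a back-of-envelope memory estimate. This is a sketch under simplifying assumptions (2 bytes per parameter for fp16 weights, 1 byte for 8-bit weights); real GPTQ checkpoints also store per-group scales and zero-points, so actual sizes differ somewhat:

```python
# Rough weights-only memory estimate for a 67B-parameter model.
# Assumes 2 bytes/param (fp16) vs. 1 byte/param (8-bit) — activations,
# KV cache, and GPTQ scale/zero-point metadata are not included.
PARAMS = 67_000_000_000

fp16_gib = PARAMS * 2 / 1024**3  # fp16 weights
int8_gib = PARAMS * 1 / 1024**3  # 8-bit quantized weights

print(f"fp16: {fp16_gib:.0f} GiB, 8-bit: {int8_gib:.0f} GiB")
```

Even under these rough assumptions, 8-bit quantization roughly halves the weight footprint, which is what makes a 67B model practical on far less GPU memory.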
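To illustrate the zero-shot vs. few-shot distinction mentioned above, here is a minimal sketch of prompt assembly. The `build_prompt` helper and the Q/A strings are hypothetical illustrations, not part of any actual evaluation harness:

```python
# Hypothetical sketch: zero-shot vs. few-shot prompt construction.
# The evaluation above is zero-shot; few-shot prepends worked examples.

def build_prompt(question, examples=()):
    """Assemble a prompt; `examples` is a sequence of (question, answer) pairs."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Zero-shot: the model sees only the test question.
zero_shot = build_prompt("What is 2 + 2?")

# Few-shot: worked examples precede the test question, which often
# improves accuracy by showing the model the expected answer format.
few_shot = build_prompt(
    "What is 2 + 2?",
    examples=[("What is 1 + 1?", "2"), ("What is 3 + 1?", "4")],
)
```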