Update README.md
README.md (CHANGED)

```diff
@@ -22,7 +22,7 @@ Please refer to [Quantization-Aware Training (QAT)](https://github.com/NVIDIA/Te
 for fine-tuning and quantization([huihui-ai/Huihui-gpt-oss-20b-mxfp4-abliterated-v2](https://huggingface.co/huihui-ai/Huihui-gpt-oss-20b-mxfp4-abliterated-v2)).
 
 ## Dataset
 
-Using huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated to generate a dataset for harmful instructions
+Using huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated to generate a dataset for harmful instructions.
 
 **Advantages**: All core metrics (Loss/Acc/Entropy) improve synchronously, with a small gap between Eval and Train (<0.01), indicating strong generalization ability. Fine-tuning shows effect in just 400 steps, with high efficiency.
```
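The dataset step referenced in the diff (using huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated to generate instruction/response pairs) could look roughly like the sketch below. This is a hypothetical outline, not the repo's actual script: the JSONL record layout, the `make_record`/`build_dataset` helpers, and the generation parameters shown in comments are all assumptions; only the model identifier comes from the commit.

```python
import json

# Hypothetical helper: wrap one instruction and the teacher model's reply
# into a single JSONL training record. The field names
# ("instruction"/"response") are an assumption, not taken from the repo.
def make_record(instruction: str, response: str) -> str:
    return json.dumps(
        {"instruction": instruction, "response": response},
        ensure_ascii=False,
    )

def build_dataset(pairs, path):
    # Write one JSON object per line (JSONL), a common fine-tuning format.
    with open(path, "w", encoding="utf-8") as f:
        for instruction, response in pairs:
            f.write(make_record(instruction, response) + "\n")

# In the actual pipeline, `response` would come from the teacher model,
# e.g. (not executed here, since it needs the 20B checkpoint):
#   from transformers import pipeline
#   gen = pipeline("text-generation",
#                  model="huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated")
#   response = gen(instruction, max_new_tokens=512)[0]["generated_text"]
```

The resulting JSONL file can then be fed to the QAT fine-tuning recipe linked at the top of the README.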