A newer version of this model is available: sthenno-com/miscii-14b-0218.
QuantFactory/miscii-14b-1028-GGUF
This is a quantized version of sthenno-com/miscii-14b-1028, created using llama.cpp.
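As a quick way to try the quantized files, here is a minimal sketch of loading a GGUF quant with llama-cpp-python. The filename below is a placeholder, not confirmed from this repo; substitute whichever quant file you actually download.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python.
# NOTE: the filename is hypothetical; use the actual file you
# download from QuantFactory/miscii-14b-1028-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="miscii-14b-1028.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm("Hello, Sthenno.", max_tokens=64)
print(out["choices"][0]["text"])
```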
Original Model Card
miscii-14b-1028
Role-based Instructions
Just pass the following as your system prompt. Note that there are NO special tokens here: the `<|...|>` markers are parsed as plain text.
An example system prompt:
```python
system_prompt: str = (
    """<|context_start|>personas<|context_sep|>
<|persona_start|>user<|persona_sep|>
{user_persona}<|persona_end|>
<|persona_start|>assistant<|persona_sep|>
{assistant_persona}<|persona_end|><|context_end|>""".format(
        # Persona describing the human side of the conversation
        user_persona="""I am Miscii.
I am the designer of Sthenno.
[Optional: Additional statements]""",
        # Persona the model should adopt
        assistant_persona="""I am Sthenno.
I speak in Chinese.
[Optional: Additional statements]""",
    )
)
```
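To illustrate how this prompt is meant to be used, here is a hedged sketch of a chat completion with llama-cpp-python; the model filename is again a placeholder, and any OpenAI-compatible chat client would accept the same `messages` structure.

```python
from llama_cpp import Llama

llm = Llama(model_path="miscii-14b-1028.Q4_K_M.gguf", n_ctx=4096)  # hypothetical filename

response = llm.create_chat_completion(
    messages=[
        # The persona-based system prompt built above
        {"role": "system", "content": system_prompt},
        # Greeting the Chinese-speaking persona ("Hello, Sthenno.")
        {"role": "user", "content": "你好，Sthenno。"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```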
Training
See the Report for miscii-1020 for more details.
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 35.05 |
| IFEval (0-shot) | 82.37 |
| BBH (3-shot) | 49.26 |
| MATH Lvl 5 (4-shot) | 6.34 |
| GPQA (0-shot) | 14.21 |
| MuSR (0-shot) | 12.00 |
| MMLU-PRO (5-shot) | 46.14 |
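For reference, the reported average appears to be the unweighted mean of the six benchmark scores, which a quick check confirms:

```python
# Unweighted mean of the six leaderboard scores above
scores = [82.37, 49.26, 6.34, 14.21, 12.00, 46.14]
print(round(sum(scores) / len(scores), 2))  # 35.05
```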
Available quantizations
This repository provides GGUF files at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit quantization levels.
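As a rough, back-of-the-envelope guide (an approximation added here, not from the card), the weight file of an n-bit quant of a ~14B-parameter model takes roughly 14e9 × n / 8 bytes, ignoring per-block quantization overhead and runtime KV-cache memory:

```python
PARAMS = 14e9  # approximate parameter count of a 14B model

# Rough weight-only size estimate; real GGUF files are somewhat
# larger due to per-block scales and metadata.
for bits in (2, 3, 4, 5, 6, 8):
    size_gb = PARAMS * bits / 8 / 1e9
    print(f"{bits}-bit: ~{size_gb:.1f} GB")
# 2-bit: ~3.5 GB ... 4-bit: ~7.0 GB ... 8-bit: ~14.0 GB
```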