---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

# Luth-1.7B-Instruct
**Luth-1.7B-Instruct** is a French-specialized fine-tune of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improves the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve on several benchmarks.
Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with an accompanying [blog post](https://huggingface.co/blog/MaxLSB/luth).

## Model Details
Luth was trained with full fine-tuning on the Luth-SFT dataset using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl), and the resulting checkpoint was then merged with the base Qwen3-1.7B model. This process retained the model's English capabilities while improving its performance on most of the selected benchmarks in both French and English.
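
The merge step above isn't fully specified on this card, so the sketch below shows one common approach, a simple linear (weight-averaging) merge. The `alpha` ratio and the `path/to/luth-sft-checkpoint` path are illustrative assumptions, not the actual recipe used for Luth.

```python
from transformers import AutoModelForCausalLM

# Hypothetical linear merge; the method and ratio actually used for Luth
# may differ. Both checkpoints share the Qwen3-1.7B architecture, so their
# state dicts have identical keys and shapes.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
tuned = AutoModelForCausalLM.from_pretrained("path/to/luth-sft-checkpoint")

alpha = 0.5  # assumed interpolation weight toward the fine-tuned weights
base_sd, tuned_sd = base.state_dict(), tuned.state_dict()
merged = {
    name: (1 - alpha) * base_sd[name] + alpha * tuned_sd[name]
    for name in base_sd
}

base.load_state_dict(merged)
base.save_pretrained("luth-merged")
```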
## Benchmark Results
We used [LightEval](https://github.com/huggingface/lighteval) for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0` (greedy decoding).
### French Benchmark Scores
| Model | IFEval French | GPQA-Diamond French | MMLU French | Math500 French | ARC-Challenge French | HellaSwag French |
|------------------------|-----------------|-----------------------|----------------|-----------------|------------------------|-------------------|
| **Luth-1.7B-Instruct** | 58.53 | 36.55 | 49.75 | 62.60 | 35.16 | 31.88 |
| Qwen3-1.7B | 54.71 | 31.98 | 28.49 | 60.40 | 33.28 | 24.86 |
| SmolLM2-1.7B-Instruct | 30.93 | 20.30 | 33.73 | 10.20 | 28.57 | 49.58 |
| Qwen2.5-1.5B-Instruct | 31.30 | 27.41 | 46.25 | 33.20 | 32.68 | 34.33 |
| LFM2-1.2B | 54.41 | 22.84 | 47.59 | 36.80 | 39.44 | 33.05 |
### English Benchmark Scores
| Model | IFEval English | GPQA-Diamond English | MMLU English | Math500 English | ARC-Challenge English | HellaSwag English |
|------------------------|-----------------|------------------------|----------------|------------------|-------------------------|--------------------|
| **Luth-1.7B-Instruct** | 65.80 | 29.80 | 60.28 | 70.40 | 42.24 | 58.53 |
| Qwen3-1.7B | 68.88 | 31.82 | 52.82 | 71.20 | 36.18 | 46.98 |
| SmolLM2-1.7B-Instruct | 49.04 | 25.08 | 50.27 | 22.67 | 42.32 | 66.94 |
| Qwen2.5-1.5B-Instruct | 39.99 | 25.76 | 59.81 | 57.20 | 41.04 | 64.48 |
| LFM2-1.2B | 68.52 | 24.24 | 55.22 | 45.80 | 42.58 | 57.61 |
## Code Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")

# Build a chat-formatted prompt ("What is the capital of France?").
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly produced tokens.
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
)
```
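
Since the base model is Qwen3-1.7B, the chat template may inherit Qwen3's optional thinking mode. Assuming the template is unchanged (an assumption, not something stated on this card), it can be toggled through the `enable_thinking` flag:

```python
# Assumes the Qwen3 chat template was inherited unchanged by the fine-tune;
# enable_thinking=False requests a direct answer without a reasoning trace.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    enable_thinking=False,
).to(model.device)
```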
## Citation
```bibtex
@misc{luth2025kurakurai,
  title        = {Luth: Efficient French Specialization for Small Language Models and Cross-Lingual Transfer},
  author       = {Lasbordes, Maxence and Gad, Sinoué},
  year         = {2025},
  howpublished = {\url{https://arxiv.org/abs/2510.05846}},
  note         = {arXiv:2510.05846}
}
```