---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

![Kurakura AI Logo](media/logo_kurakura.png)


# Luth-1.7B-Instruct

**Luth-1.7B-Instruct** is a French-specialized fine-tune of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improved the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remained stable and even improved in some areas.

Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with the accompanying [blog post](https://huggingface.co/blog/MaxLSB/luth).

![Luth Graph](media/luth-graph.png)

## Model Details

Luth was trained with full fine-tuning on the Luth-SFT dataset using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl), and the resulting model was then merged with the base Qwen3-1.7B model. This process retained the model's English capabilities while improving performance on most of the selected benchmarks in both French and English.
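
For illustration, the merge step could look like a simple weight interpolation between the fine-tuned checkpoint and the base model. This is only a sketch under assumptions: the card does not specify the merge method or ratio, and the local SFT checkpoint path below is hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base model and a (hypothetical, local) full fine-tuned checkpoint
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.bfloat16)
sft = AutoModelForCausalLM.from_pretrained("./luth-sft-checkpoint", torch_dtype=torch.bfloat16)

# Simple linear interpolation of the two state dicts.
# alpha = 0.5 is an assumed ratio, not the one used for Luth-1.7B-Instruct.
alpha = 0.5
base_sd = base.state_dict()
sft_sd = sft.state_dict()
merged_sd = {name: (1 - alpha) * base_sd[name] + alpha * sft_sd[name] for name in base_sd}

# Load the merged weights back into the base architecture and save
base.load_state_dict(merged_sd)
base.save_pretrained("./luth-1.7b-merged")
```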

## Benchmark Results

We used [LightEval](https://github.com/huggingface/lighteval) for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0` (greedy decoding).

### French Benchmark Scores

| Model                  | IFEval<br>French | GPQA-Diamond<br>French | MMLU<br>French | Math500<br>French | Arc-Challenge<br>French | Hellaswag<br>French |
|------------------------|-----------------|-----------------------|----------------|-----------------|------------------------|-------------------|
| **Luth-1.7B-Instruct** | <u>58.53</u>       | <u>36.55</u>             | <u>49.75</u>      | <u>62.60</u>       | 35.16                  | 31.88             |
| Qwen3-1.7B             | 54.71           | 31.98                 | 28.49          | 60.40           | 33.28                  | 24.86             |
| SmolLM2-1.7B-Instruct  | 30.93           | 20.30                 | 33.73          | 10.20           | 28.57                  | <u>49.58</u>         |
| Qwen2.5-1.5B-Instruct  | 31.30           | 27.41                 | 46.25          | 33.20           | 32.68                  | 34.33             |
| LFM2-1.2B              | 54.41           | 22.84                 | 47.59          | 36.80           | <u>39.44</u>              | 33.05             |

### English Benchmark Scores

| Model                  | IFEval<br>English | GPQA-Diamond<br>English | MMLU<br>English | Math500<br>English | Arc-Challenge<br>English | Hellaswag<br>English |
|------------------------|-----------------|------------------------|----------------|------------------|-------------------------|--------------------|
| **Luth-1.7B-Instruct** | 65.80           | 29.80                  | <u>60.28</u>      | 70.40            | 42.24                   | 58.53              |
| Qwen3-1.7B             | <u>68.88</u>       | <u>31.82</u>              | 52.82          | <u>71.20</u>        | 36.18                   | 46.98              |
| SmolLM2-1.7B-Instruct  | 49.04           | 25.08                  | 50.27          | 22.67            | 42.32                   | <u>66.94</u>          |
| Qwen2.5-1.5B-Instruct  | 39.99           | 25.76                  | 59.81          | 57.20            | 41.04                   | 64.48            |
| LFM2-1.2B              | 68.52           | 24.24                  | 55.22          | 45.80            | <u>42.58</u>               | 57.61              |

## Code Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")

# Build a chat-formatted prompt ("What is the capital of France?")
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate a reply and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1] :], skip_special_tokens=True
    )
)
```
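
Alternatively, the model can be used through the `transformers` pipeline API, matching the `text-generation` pipeline tag above. This is a minimal sketch; the generation settings are illustrative, and the chat-style return format shown requires a recent `transformers` version.

```python
from transformers import pipeline

# Text-generation pipeline with chat-style (list of messages) input
pipe = pipeline("text-generation", model="kurakurai/Luth-1.7B-Instruct")

messages = [{"role": "user", "content": "Quelle est la capitale de la France?"}]
result = pipe(messages, max_new_tokens=100)

# With chat input, "generated_text" holds the full conversation;
# the last message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```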

## Citation

```bibtex
@misc{luth2025kurakurai,
  title        = {Luth: Efficient French Specialization for Small Language Models and Cross-Lingual Transfer},
  author       = {Lasbordes, Maxence and Gad, Sinoué},
  year         = {2025},
  howpublished = {\url{https://arxiv.org/abs/2510.05846}},
  note         = {arXiv:2510.05846}
}
```