---
license: apache-2.0
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- trillion
- llama
- sft
---
# 🧠 Trillion-7B-preview-Ko-Reasoning
> A large-scale Korean reasoning model fine-tuned from **trillionlabs/Trillion-7B-preview**, designed to excel in logical and multi-hop reasoning tasks in Korean.
---
## Overview
**Trillion-7B-preview-Ko-Reasoning** is a fine-tuned version of [trillionlabs/Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:
- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
---
## 🧪 Benchmark Results
> - All benchmarks were measured with **0-shot CoT (Chain-of-Thought)** prompting; a minimal sketch of this setup follows the table below.
> - **Score** is either the **accuracy (%)** of correct answers or a **1-10** rating from a judge model.
> - **LLM-as-a-judge** benchmarks were evaluated with **GPT-4o (2024-08-01-preview)**.
| **Benchmark** | **Score** |
|------------------|---------------|
| GPQA diamond | 56.2 |
| GSM8K | 53.1 |
| HAERAE | 73.7 |
| KSM | 57.8 |
| LogicKor | 8.40 |
| Math500 | 72.8 |
| MT-Bench | 7.90 |
| MT-Bench(Ko) | 7.87 |
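The evaluation harness itself is not included in this card. As a rough illustration of the 0-shot CoT setup, the sketch below shows how a benchmark question might be wrapped in a single step-by-step instruction and how a final answer could be extracted for the math-style benchmarks (GSM8K, KSM, Math500). The instruction wording and the boxed-answer convention are assumptions for illustration, not the actual protocol used to produce the scores above.

```python
import re

def build_cot_prompt(question: str) -> str:
    """Wrap a benchmark question in a generic 0-shot CoT instruction (illustrative)."""
    return (
        f"{question}\n\n"
        "Think through the problem step by step, "
        "then give the final answer in the form \\boxed{answer}."
    )

def extract_final_answer(generation: str) -> str | None:
    """Take the last \\boxed{...} expression from the model output (illustrative)."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", generation)
    return matches[-1] if matches else None

print(build_cot_prompt("What is 17 * 23?"))
print(extract_final_answer(r"17 * 23 = \boxed{391}"))  # -> 391
```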
---
## 🧑‍💻 Usage
Install Transformers >= 4.50:
```bash
pip install -U transformers
```
Basic example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/Trillion-7B-preview-Ko-Reasoning"

# Load the model and tokenizer; device_map="auto" places the weights on the
# available device(s) and torch_dtype="auto" keeps the checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example Korean prompt: "Between Seoul and Busan, which is bigger?"
prompt = "서울과 부산 중 어디가 더 커?"
messages = [
    {"role": "user", "content": prompt}
]

# Build the chat-formatted input and append the generation prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
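For interactive use you may prefer to stream tokens as they are generated and to sample rather than decode greedily. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the example above; the sampling values are illustrative, not officially recommended settings:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.7,    # illustrative value
    top_p=0.9,          # illustrative value
    streamer=streamer,
)
```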
---
## 🧠 Base Model: trillionlabs/Trillion-7B-preview
The base model, [trillionlabs/Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview), is an LLM developed by Trillion Labs.
For more technical details, refer to the [Trillion 7B Technical Report](https://arxiv.org/pdf/2504.15431).
---
## 🧱 Model Architecture
| Property | Value |
|------------------|------------------------|
| Architecture | LlamaForCausalLM |
| Parameters | 7B |
| Context Length | 4,096 tokens |
| Tokenizer | LlamaTokenizer (BPE) |
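These properties can be verified against the hosted configuration without downloading the weights. A minimal sketch using `AutoConfig`; the attribute names follow the standard Llama configuration, and the actual values come from the repository's `config.json`:

```python
from transformers import AutoConfig, AutoTokenizer

repo = "DimensionSTP/Trillion-7B-preview-Ko-Reasoning"

# Model class and context window as declared in config.json.
config = AutoConfig.from_pretrained(repo)
print(config.architectures)            # e.g. ["LlamaForCausalLM"]
print(config.max_position_embeddings)  # context length in tokens

# Vocabulary size as seen by the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(repo)
print(len(tokenizer))
```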
---
## 📅 Release Date
**March 2025**
This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.
---
## 💬 Contact
For questions, collaborations, or deployment inquiries, please contact:
- 🤗 Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- ✉️ Email: [email protected]
---
## 📦 Available Checkpoints
- ✅ `main`: Final stable version from the `last` branch
- ✅ All training artifacts available (tokenizer, config, model weights)
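To pin a specific branch when loading, pass `revision` to `from_pretrained`. A minimal sketch; `main` is the default branch and, per the list above, holds the final stable weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DimensionSTP/Trillion-7B-preview-Ko-Reasoning"

# Explicitly pin the branch; omitting revision loads "main" by default.
model = AutoModelForCausalLM.from_pretrained(repo, revision="main", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo, revision="main")
```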