---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
tags:
  - chess
  - tinyllama
  - lora
  - json
  - alpaca-format
  - ai-tournament
  - aura
---

# β™ŸοΈ Konvah's Chess TinyLlama

This model is a fine-tuned version of [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using LoRA for the **Aura Chess AI Tournament**. It predicts high-quality chess moves in JSON format, given a move history, color, and a list of legal moves.

---

## 🧠 Model Objective

The model learns to:
- Choose the best legal move (`move`)
- Give a short explanation (`reasoning`) in ≀10 words
- Format responses as valid JSON
- Respond in `[INST] ... [/INST]` format

---

## πŸ’‘ Input Format

The model uses structured prompts:

```json
[INST]
You are a chess player.
{"moveHistory": ["e4", "e5", "Nf3"], "possibleMoves": ["Nc3", "Bc4", "d4"], "color": "w"}
[/INST]
```

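For illustration, a prompt in this format can be assembled in Python as below; the `build_prompt` helper is a sketch written for this card, not part of the repository.

```python
import json

def build_prompt(move_history, possible_moves, color):
    # Serialize the game state with the exact keys the model was fine-tuned on.
    state = {"moveHistory": move_history, "possibleMoves": possible_moves, "color": color}
    return f"[INST]\nYou are a chess player.\n{json.dumps(state)}\n[/INST]"

print(build_prompt(["e4", "e5", "Nf3"], ["Nc3", "Bc4", "d4"], "w"))
```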
## 🎯 Output Format

Always a single-line JSON object:

```json
{"move": "Bc4", "reasoning": "Develops bishop and targets f7"}
```

- The `move` must be from `possibleMoves`
- The `reasoning` is free-form but short

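A minimal sketch of how a tournament harness might parse and validate the model's answer; the `parse_move` helper and its fallback to the first legal move are assumptions, not part of this repository.

```python
import json

def parse_move(raw_output, possible_moves):
    # The JSON answer follows the closing [/INST] tag in the generated text.
    answer = raw_output.split("[/INST]")[-1].strip()
    try:
        parsed = json.loads(answer.splitlines()[0]) if answer else {}
    except json.JSONDecodeError:
        parsed = {}
    # Enforce the rule that the move must come from possibleMoves.
    if parsed.get("move") not in possible_moves:
        parsed = {"move": possible_moves[0], "reasoning": "fallback: first legal move"}
    return parsed

print(parse_move('{"move": "Bc4", "reasoning": "Develops bishop"}', ["Nc3", "Bc4", "d4"]))
```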
πŸ› οΈ Training Details
Base: TinyLlama-1.1B-Chat

LoRA (8-bit): q_proj, k_proj, v_proj, o_proj

Epochs: 3

Dataset: ~70 samples from master-level PGNs

Format: instruction-style using transformers.Trainer

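For context, a LoRA setup matching these details could be expressed with `peft` roughly as follows; the rank, alpha, and dropout values are illustrative assumptions, since the card only states the target modules and 8-bit loading.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 8-bit, as noted in the training details.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

lora_config = LoraConfig(
    r=16,                # assumed rank (not stated in the card)
    lora_alpha=32,       # assumed scaling (not stated in the card)
    lora_dropout=0.05,   # assumed dropout (not stated in the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```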
## πŸ“ˆ Performance

| Metric      | Value |
| ----------- | ----- |
| Final loss  | 1.08  |
| Epochs      | 3     |
| Batch size  | 1     |
| Total steps | 51    |

## πŸš€ Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Konvah/chess-tinyllama")
tokenizer = AutoTokenizer.from_pretrained("Konvah/chess-tinyllama")

prompt = """[INST]
You are a chess player.
{"moveHistory": ["e4", "e5", "Nf3"], "possibleMoves": ["Nc3", "Bc4", "d4"], "color": "w"}
[/INST]"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

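If the repository hosts only the LoRA adapter rather than merged weights (the card lists `peft` as the library), loading through `peft` may be the more direct route; a sketch under that assumption:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the TinyLlama base model and applies the adapter weights in one call.
model = AutoPeftModelForCausalLM.from_pretrained("Konvah/chess-tinyllama")
tokenizer = AutoTokenizer.from_pretrained("Konvah/chess-tinyllama")
```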
## πŸ“Ž License

Open for research and tournament evaluation. Not intended for production without additional safety testing.

## ✍️ Author

Ismail Abubakar (@boringcrypto_)

Contact: [email protected]

πŸ† Aura Tournament
This model was created for the Aura Chess LLM Tournament to demonstrate reasoning and strategy prediction using open-source LLMs.

---