Tri-0.5B-Base is a ~500M-parameter multilingual language model trained as an early experimental run ahead of the Tri-7B training.
The model covers English, Korean, Japanese, and Chinese, with additional exposure to programming languages and mathematical reasoning. Pretrained on ~1.26 trillion tokens, it serves as a lightweight base model for research, fine-tuning, and open-source community use, especially for advancing Korean LLM development.
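A minimal text-generation example with transformers (bfloat16 weights, nucleus sampling; device_map="auto" requires the accelerate package):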
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "trillionlabs/Tri-0.5B-Base"

# Load the tokenizer and model weights from the Hugging Face Hub.
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,  # half-precision weights to save memory
    device_map="auto",           # place layers automatically (requires accelerate)
)

# Tokenize the prompt and move the tensors to the model's device.
prompt = "Write a short paragraph about Hangul."
x = tok(prompt, return_tensors="pt").to(model.device)

# Sample up to 128 new tokens with nucleus sampling.
y = model.generate(
    **x,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tok.decode(y[0], skip_special_tokens=True))
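Since the card positions this checkpoint as a base for fine-tuning, here is a minimal sketch of parameter-efficient fine-tuning with LoRA via the peft library. peft is not part of this card, and the target_modules names assume a Llama-style attention layout, so verify them against model.named_modules() before running. A real run would iterate over a dataset (e.g. with Trainer) rather than the single toy step shown here.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

name = "trillionlabs/Tri-0.5B-Base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach low-rank adapters; only these small matrices are trained.
# target_modules is an assumption (Llama-style names) -- check
# model.named_modules() for the actual projection names.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights require grad

# One illustrative optimization step on a toy Korean sentence
# ("Hangul was created in the 15th century.").
batch = tok(["한글은 15세기에 창제되었다."], return_tensors="pt").to(model.device)
opt = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
out = model(**batch, labels=batch["input_ids"])  # causal-LM loss on the batch
out.loss.backward()
opt.step()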
This model is released under the Apache 2.0 License. See LICENSE for details.
If you use this model, please cite it as:
@misc{trillionlabs_tri05b_base_2025,
  title  = {Tri-0.5B-Base},
  author = {Trillion Labs},
  year   = {2025},
  note   = {https://huggingface.co/trillionlabs/Tri-0.5B-Base}
}