Fine-zhtw is a Traditional Chinese (zh-TW) collection inspired by Hugging Face's Fine series, built with mostly self-designed methods.
AI & ML interests
Traditional Chinese language models
A collection of Formosa-1 (F1) reasoning models and datasets focused on Traditional Chinese instruction-following and logic.
- twinkle-ai/Llama-3.2-3B-F1-Instruct
  Text Generation • 4B • Updated • 3.41k • 23
- twinkle-ai/Llama-3.2-3B-F1-Reasoning-Instruct
  Text Generation • 4B • Updated • 452 • 46
- twinkle-ai/Llama-3.2-3B-F1-Reasoning-Instruct-GGUF
  Text Generation • 4B • Updated • 43 • 12
- lianghsun/Llama-3.2-3B-F1-Base
  Text Generation • 4B • Updated • 1
Instruction-tuned Gemma-3 models optimized for agentic workflows in Traditional Chinese.
A curated collection of datasets designed to evaluate and train reasoning capabilities in Traditional Chinese across various domains.
Benchmark log generated with Twinkle Eval, recording the model's outputs for each prompt.