# Professor/kinyarwandatts

LoRA fine-tune of OuteAI/Llama-OuteTTS-1.0-1B for Kinyarwanda text-to-speech.
## Model Details
- Developer: Professor
- Base model: OuteAI/Llama-OuteTTS-1.0-1B
- Languages: `rw` (Kinyarwanda)
- License: apache-2.0
- Library: 🤗 Transformers / TRL / Unsloth
## How to Use
Installation instructions and model weights are coming soon.
Note: Exact audio synthesis depends on your TTS pipeline/vocoder.
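Once the adapter weights are published, loading should follow the standard PEFT pattern. This is a hedged sketch, not tested against the released weights: the adapter repo id is assumed to be this model's Hub id, and audio decoding is deliberately left out because it depends on your OuteTTS pipeline/vocoder.

```python
# Hypothetical loading sketch. The repo ids below are assumptions based on
# this model card; audio synthesis itself is not shown (pipeline-dependent).
def load_tts_model():
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "OuteAI/Llama-OuteTTS-1.0-1B"
    base = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    # Attach the LoRA adapter (assumed to live at the model card's Hub id).
    model = PeftModel.from_pretrained(base, "Professor/kinyarwandatts")
    return model, tokenizer
```

The heavy imports sit inside the function so the sketch can be read (and the module imported) without `transformers`/`peft` installed.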
## Training Details
- Dataset: mbazaNLP/kinyarwanda-tts-dataset (~4k pairs)
- Trainer: TRL `SFTTrainer` (via Unsloth)
- Hyperparameters: see `trainer_state.json` in `training_artifacts/`.
- Hardware:

  ```json
  {
    "machine": "x86_64",
    "system": "Linux",
    "python": "3.12.11",
    "gpu_name": "Tesla T4",
    "gpu_count": "1",
    "gpu_capability": "(7, 5)",
    "cuda": "12.6",
    "bf16_support": "False"
  }
  ```
- Logs & checkpoints: see `training_artifacts/` (TensorBoard events & `trainer_state.json`).
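The training setup above (Unsloth + TRL `SFTTrainer` on a single T4) can be sketched as follows. This is an illustrative assumption, not the actual training script: the LoRA rank, 4-bit loading, and config values are placeholders, since the real hyperparameters live in `trainer_state.json`.

```python
# Hypothetical fine-tuning sketch. Hyperparameters here are illustrative
# placeholders, NOT the values used for this model (see trainer_state.json).
def train_lora_sketch():
    from unsloth import FastLanguageModel
    from trl import SFTTrainer, SFTConfig
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        "OuteAI/Llama-OuteTTS-1.0-1B",
        load_in_4bit=True,  # keeps the 1B model within a single T4's memory
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,            # LoRA rank (assumed)
        lora_alpha=16,   # scaling (assumed)
    )
    dataset = load_dataset("mbazaNLP/kinyarwanda-tts-dataset", split="train")
    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=SFTConfig(
            output_dir="training_artifacts",
            fp16=True,   # T4 (compute capability 7.5) has no bf16 support
        ),
    )
    trainer.train()
```

Note `fp16=True` rather than `bf16=True`: the hardware block above reports `bf16_support: False` for the Tesla T4.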
## Evaluation
- Track `eval_loss` in TensorBoard. Add MOS or intelligibility metrics if available.
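Besides TensorBoard, `eval_loss` can be pulled straight from `trainer_state.json`, which follows the standard Hugging Face Trainer layout (`log_history` as a list of step records). A minimal sketch, with synthetic placeholder values rather than this model's real metrics:

```python
# Sketch: read the best eval_loss from a trainer_state.json file.
# The sample values written below are synthetic, for illustration only.
import json
from pathlib import Path

def best_eval_loss(path):
    """Return the lowest eval_loss recorded in a trainer_state.json file."""
    state = json.loads(Path(path).read_text())
    losses = [e["eval_loss"] for e in state["log_history"] if "eval_loss" in e]
    return min(losses) if losses else None

# Synthetic example file, mimicking the standard Trainer log layout.
sample = {
    "log_history": [
        {"step": 100, "loss": 2.31},
        {"step": 200, "eval_loss": 1.98},
        {"step": 400, "eval_loss": 1.74},
    ]
}
Path("trainer_state.json").write_text(json.dumps(sample))
print(best_eval_loss("trainer_state.json"))  # → 1.74
```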
## Limitations & Risks
- Kinyarwanda-only; may mispronounce out-of-vocabulary (OOV) words or proper names.
## Citation
```bibtex
@misc{Professor_kinyarwandatts,
  author = {Olufemi Victor Tolulope},
  title  = {Professor/kinyarwandatts: Kinyarwanda TTS LoRA},
  year   = {2025},
  url    = {https://huggingface.co/Professor/kinyarwandatts}
}
```