
KaniTTS EXPO2025 Osaka Japanese


A high-speed, high-fidelity Text-to-Speech model optimized for real-time conversational AI applications.


『いのち輝く未来社会のデザイン』という大阪・関西万博2025のテーマを祝し、キルギスの人々から日本の皆さまへ --心と心 をつなぐ贈り物として、どうぞお受け取り ください。


In honor of Expo Osaka 2025 and its motto "Designing Future Society for Our Lives," we humbly present this gift from the people of the Kyrgyz Republic to the people of Japan, heart to heart.


Overview

KaniTTS uses a two-stage pipeline that combines a language model with an efficient audio codec for exceptional speed and audio quality. The backbone LLM first generates a compressed audio-token representation of the text; a neural audio codec then rapidly decodes those tokens into a waveform, yielding very low latency.
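The two-stage flow can be sketched with simple stand-ins. Everything here is illustrative: the function names, the 1024-entry codebook, and the 50 frames-per-second codec rate are assumptions for the sketch, not confirmed details of the KaniTTS implementation.

```python
import numpy as np

SAMPLE_RATE = 22050        # model output sample rate (22 kHz)
FRAMES_PER_SECOND = 50     # hypothetical codec frame rate

def backbone_generate_tokens(text, seconds):
    # Stage 1 (stand-in): the backbone LLM maps text to a short
    # sequence of discrete codec token ids, far fewer than samples.
    n_frames = int(seconds * FRAMES_PER_SECOND)
    rng = np.random.default_rng(0)
    return rng.integers(0, 1024, size=n_frames)

def codec_decode(tokens):
    # Stage 2 (stand-in): the neural codec expands each token frame
    # into a fixed number of waveform samples.
    samples_per_frame = SAMPLE_RATE // FRAMES_PER_SECOND
    return np.zeros(len(tokens) * samples_per_frame, dtype=np.float32)

tokens = backbone_generate_tokens("こんにちは", seconds=2.0)
audio = codec_decode(tokens)
print(len(tokens), len(audio))  # 100 tokens -> 44100 samples
```

The speed advantage comes from this asymmetry: the LLM only has to produce ~50 tokens per second of audio, while the cheap codec decoder handles the expansion to 22,050 samples per second.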

Key Specifications:

  • Model Size: 370M parameters
  • Sample Rate: 22kHz
  • Language: Japanese
  • License: Apache 2.0

Performance

NVIDIA RTX 5090 Benchmarks:

  • Latency: ~1 second to generate 15 seconds of audio
  • Memory: 2GB GPU VRAM
  • Quality Metrics: MOS 4.3/5 (naturalness), WER <5% (accuracy)

Pretraining:

  • Dataset: ~80k hours from LibriTTS, Common Voice, and Emilia
  • Hardware: 8x H100 GPUs, 45 hours training time on Lambda AI

Voice Datasets

Audio Examples

Each example text below is paired with a playable audio sample on the model page (English translations added for reference):

  • こんにちは!カニと申します。私はボイスモデルです!何についてお話ししましょうか? ("Hello! My name is Kani. I am a voice model! What shall we talk about?")
  • 2025年の大阪・関西万博は素晴らしいイベントでした。 ("Expo 2025 Osaka, Kansai was a wonderful event.")
  • 「いのち輝く未来社会のデザイン」というテーマが多くの人の心に残りました。 ("The theme 'Designing Future Society for Our Lives' left a lasting impression on many people.")
  • 世界中の国々が未来の技術を紹介しました。 ("Countries from around the world showcased technologies of the future.")
  • 小さな一歩でも、前に進めば景色が変わります。 ("Even a small step forward changes the scenery.")
  • 何気ない日常の中にも、心が温まる瞬間があります。 ("Even in ordinary daily life, there are moments that warm the heart.")

Use Cases

  • Conversational AI: Real-time speech for chatbots and virtual assistants
  • Edge/Server Deployment: Resource-efficient inference on affordable hardware
  • Accessibility: Screen readers and language learning applications
  • Research: Fine-tuning for specific voices, accents, or emotions

Limitations

  • Performance degrades with inputs exceeding 2000 tokens
  • Limited expressivity without fine-tuning for specific emotions
  • May inherit biases from training data in prosody or pronunciation
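The 2000-token limit can be worked around by splitting long text at sentence boundaries before synthesis and concatenating the resulting audio. A minimal sketch, using character count as a stand-in for the model tokenizer's token count:

```python
def split_text(text, max_tokens=2000, count_tokens=len):
    # Split on Japanese sentence boundaries (。) so no chunk exceeds the
    # budget. count_tokens is a stand-in (character count here); replace
    # it with the model tokenizer's length function in practice.
    # Assumes sentences are terminated with 。.
    sentences = [s + "。" for s in text.split("。") if s]
    chunks, current = [], ""
    for s in sentences:
        if current and count_tokens(current + s) > max_tokens:
            chunks.append(current)
            current = s
        else:
            current += s
    if current:
        chunks.append(current)
    return chunks

chunks = split_text("こんにちは。元気ですか。今日はいい天気ですね。", max_tokens=12)
print(chunks)  # ['こんにちは。元気ですか。', '今日はいい天気ですね。']
```

Synthesizing each chunk separately keeps every input inside the range where quality holds up, at the cost of losing some cross-sentence prosody.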

Optimization Tips

  • Multilingual Performance: Continually pretrain on target language datasets and fine-tune NanoCodec
  • Batch Processing: Use batches of 8-16 for high-throughput scenarios
  • Hardware: Optimized for NVIDIA Blackwell architecture GPUs
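For the batching tip, grouping incoming requests into fixed-size batches is straightforward. This is a generic helper, not part of the KaniTTS API:

```python
def batched(items, batch_size=8):
    # Yield fixed-size groups of requests; the tip above suggests
    # batch sizes of 8-16 for high-throughput serving.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

requests = [f"text {i}" for i in range(20)]
sizes = [len(b) for b in batched(requests, batch_size=8)]
print(sizes)  # [8, 8, 4]
```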


Acknowledgments

Built on top of LiquidAI LFM2 350M as the backbone LLM and NVIDIA NanoCodec for audio processing.

Responsible Use

Prohibited activities include:

  • Illegal content or harmful, threatening, defamatory, or obscene material
  • Hate speech, harassment, or incitement of violence
  • Generating false or misleading information
  • Impersonating individuals without consent
  • Malicious activities such as spamming, phishing, or fraud

By using this model, you agree to comply with these restrictions and all applicable laws.

Contact

Have a question, feedback, or need support? Please fill out our contact form and we'll get back to you as soon as possible.
