SONAR-LLM (39M)

We present SONAR-LLM, a decoder-only transformer that "thinks" in the same continuous SONAR embedding space as the Large Concept Model (LCM), yet is supervised through token-level cross-entropy propagated via the frozen SONAR decoder. This hybrid objective retains the semantic abstraction of LCM while eliminating its diffusion sampler and restoring a likelihood-based training signal. Across model sizes from 39M to 1.3B parameters, SONAR-LLM attains competitive generation quality.
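
The objective can be pictured with a short sketch: the model predicts the next sentence embedding in SONAR space, the frozen SONAR decoder turns that embedding into token logits, and token-level cross-entropy supplies the gradient. The function names, call signature, and tensor shapes below are illustrative assumptions, not the released training code.

import torch.nn.functional as F

def sonar_llm_loss(llm, sonar_decoder, prev_sentence_embs, next_sentence_tokens):
    """Illustrative sketch of the SONAR-LLM training objective."""
    # 1) The decoder-only transformer predicts the next sentence embedding
    #    in continuous SONAR space from the embeddings of previous sentences.
    pred_emb = llm(prev_sentence_embs)[:, -1]                        # (B, d_sonar)
    # 2) The frozen SONAR decoder maps the predicted embedding to token logits
    #    under teacher forcing. Its weights are frozen (requires_grad=False),
    #    but gradients still flow through it back to pred_emb.
    logits = sonar_decoder(pred_emb, next_sentence_tokens[:, :-1])   # (B, L-1, V)
    # 3) Token-level cross-entropy restores a likelihood-based training signal.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        next_sentence_tokens[:, 1:].reshape(-1),
    )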

Original repository: FusionBrainLab/SONAR-LLM

Paper: arXiv:2508.05305

A minimal bundle containing the SONAR-LLM 39M checkpoint and inference code.

Install

  • Use a fresh venv or conda environment
  • Install SONAR from the official repo: facebookresearch/SONAR
  • Ensure PyTorch and transformers are installed
  • (Optional) Download NLTK punkt: python -c "import nltk; nltk.download('punkt')"

Usage

from huggingface_hub import snapshot_download
import sys

# Download the repo snapshot and put the bundled sonarllm_model package on the import path.
p = snapshot_download("raxtemur/sonar-llm-39m")
sys.path.insert(0, p)

from sonarllm_model import SONARLLMGenerator, SONARLLMGenerationConfig

# Load the SONAR-LLM generator from the downloaded checkpoint.
gen = SONARLLMGenerator.load_from_checkpoint(p)

# SONAR embedding of an end-of-sequence sentence, used by generate() to detect the end of the sequence.
eos_emb = gen.t2vec.predict(["End of sequence."], source_lang="eng_Latn").to(gen.device)

# temperature=0 selects deterministic (greedy) decoding.
cfg = SONARLLMGenerationConfig(temperature=0)
print(gen.generate("Once upon a time", eos_emb, cfg))
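
For stochastic decoding, the same config can be built with a nonzero temperature. This is a minimal variation of the snippet above; temperature is the only generation parameter documented here, and the assumption that nonzero values enable sampling is not confirmed by the source.

# Sampling variant (assumes temperature > 0 enables stochastic decoding).
sample_cfg = SONARLLMGenerationConfig(temperature=0.8)
print(gen.generate("Once upon a time", eos_emb, sample_cfg))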

Files

  • pytorch_model.bin
  • config.json
  • sonarllm_model/

Notes
