SONAR-LLM
We present SONAR-LLM, a decoder-only transformer that "thinks" in the continuous SONAR sentence embedding space, yet is supervised through token-level cross-entropy propagated via the frozen SONAR decoder. This hybrid objective retains the semantic abstraction of the Large Concept Model (LCM) while eliminating its diffusion sampler and restoring a likelihood-based training signal. Across model sizes from 39M to 1.3B parameters, SONAR-LLM attains competitive generation quality.
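The objective in one picture: the LLM predicts the next sentence embedding, the frozen SONAR decoder turns that embedding into token logits, and the loss is ordinary cross-entropy on the gold tokens. Below is a minimal sketch of that loss under assumed shapes; `llm` and `frozen_sonar_decoder` are generic callables, and all names are illustrative rather than the authors' implementation (teacher forcing inside the decoder is elided).

```python
import torch.nn.functional as F

def sonar_llm_loss(llm, frozen_sonar_decoder, sent_embs, token_ids):
    """sent_embs: (B, S, D) gold SONAR embeddings of S consecutive sentences.
    token_ids: (B, S, T) gold token ids of each sentence (padded to length T)."""
    # The decoder-only LLM maps each prefix of sentence embeddings to a
    # prediction of the next sentence embedding.
    pred_embs = llm(sent_embs[:, :-1])        # (B, S-1, D)

    # The SONAR decoder is frozen (its weights receive no updates), but it is
    # still differentiable in its input, so the token loss reaches the LLM.
    logits = frozen_sonar_decoder(pred_embs)  # (B, S-1, T, vocab)

    # Token-level cross-entropy against the gold next-sentence tokens.
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        token_ids[:, 1:].reshape(-1),
    )
```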
Original repository: FusionBrainLab/SONAR-LLM
Paper: arXiv:2508.05305
Minimal bundle with SONAR-LLM 39M checkpoint and code.
python -c "import nltk; nltk.download('punkt')"from huggingface_hub import snapshot_download
import sys
p = snapshot_download("raxtemur/sonar-llm-39m")
sys.path.insert(0, p)
from sonarllm_model import SONARLLMGenerator, SONARLLMGenerationConfig
gen = SONARLLMGenerator.load_from_checkpoint(p)
eos_emb = gen.t2vec.predict(["End of sequence."], source_lang="eng_Latn").to(gen.device)
cfg = SONARLLMGenerationConfig(temperature=0)
print(gen.generate("Once upon a time", eos_emb, cfg))
Files in the bundle:

- `pytorch_model.bin`
- `config.json`
- `sonarllm_model/config.json`
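For a quick sanity check of the downloaded files, standard PyTorch and JSON calls are enough (a sketch; the exact contents of each file are not documented here):

```python
import json
import os

import torch

# p is the snapshot path from the example above.
state = torch.load(os.path.join(p, "pytorch_model.bin"), map_location="cpu")
print(len(state), "entries; first key:", next(iter(state)))

with open(os.path.join(p, "config.json")) as f:
    print(json.load(f))
```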