---
language:
- en
tags:
- sonar-llm
- sonar
- llama
- text-generation
- embeddings
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# SONAR-LLM (1.3B) -- Text summarization checkpoint

We present SONAR-LLM, a decoder-only transformer that "thinks" in the same continuous SONAR embedding space, yet is supervised through token-level cross-entropy propagated via the frozen SONAR decoder. This hybrid objective retains the semantic abstraction of the Large Concept Model (LCM) while eliminating its diffusion sampler and restoring a likelihood-based training signal. Across model sizes from 39M to 1.3B parameters, SONAR-LLM attains competitive generation quality.

Original repository: [FusionBrainLab/SONAR-LLM](https://github.com/FusionBrainLab/SONAR-LLM)

Paper: [arXiv:2508.05305](https://arxiv.org/abs/2508.05305)

This is a minimal bundle with the SONAR-LLM 1.3B checkpoint and the code needed to run it.

## Install

- Use a fresh venv/conda environment.
- Install SONAR from the official repo: [facebookresearch/SONAR](https://github.com/facebookresearch/SONAR).
- Ensure PyTorch and `transformers` are installed.
- (Optional) Download the NLTK punkt tokenizer data: `python -c "import nltk; nltk.download('punkt')"`

## Usage

```python
from huggingface_hub import snapshot_download
import sys

# Download the model bundle and make the bundled code importable
# (repo id as in the original snippet; use this checkpoint's repo id if it differs).
p = snapshot_download("raxtemur/sonar-llm-300m")
sys.path.insert(0, p)

from sonarllm_model import SONARLLMGenerator, SONARLLMGenerationConfig

gen = SONARLLMGenerator.load_from_checkpoint(p)

# Embed the end-of-sequence sentence; generation stops when the model predicts it.
eos_emb = gen.t2vec.predict(["End of sequence."], source_lang="eng_Latn").to(gen.device)

# temperature=0 selects greedy (deterministic) decoding.
cfg = SONARLLMGenerationConfig(temperature=0)
print(gen.generate("Petya loves Masha. Masha loves Gosha. Gosha loves Petya. Text summarization in one sentence only.", eos_emb, cfg))
```

## Files

- `pytorch_model.bin` -- model weights
- `config.json` -- model configuration (including the tokenizer name)
- `sonarllm_model/` -- bundled model and generation code

## Notes

- SONAR install guide: [facebookresearch/SONAR](https://github.com/facebookresearch/SONAR)
- The tokenizer name is taken from `config.json`.
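
As a follow-up to the usage snippet above, here is a minimal sketch of summarizing several texts with one loaded generator. It reuses only the objects created above (`gen`, `eos_emb`, `cfg`); the prompt texts are illustrative.

```python
# Reuses gen, eos_emb, and cfg from the usage snippet above.
texts = [
    "The cat chased the mouse. The mouse hid under the sofa. Text summarization in one sentence only.",
    "Alice met Bob in Paris. They visited the Louvre together. Text summarization in one sentence only.",
]

# With temperature=0 the summaries are deterministic across runs.
for text in texts:
    print(gen.generate(text, eos_emb, cfg))
```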