MemoryDecoder
This Memory Decoder model is trained on the Law domain and can be adapted to enhance any model in the Llama3, Llama3.1, and Llama3.2 families.
This checkpoint is initialized from a Qwen-based Memory Decoder, with the embedding layer adapted to fit the Llama tokenizer. This enables efficient cross-model-family knowledge transfer.
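The snippet below is a minimal sketch of the plug-and-play usage pattern: the Memory Decoder is loaded as a small causal LM next to a Llama base model, and the two next-token distributions are interpolated. The checkpoint id `MemoryDecoder-law`, the interpolation weight `LAMBDA`, and the prompt are illustrative placeholders, not values fixed by this card; see the GitHub repository below for the reference implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder identifiers -- substitute the checkpoints you actually use.
BASE_ID = "meta-llama/Meta-Llama-3-8B"
MEMDEC_ID = "MemoryDecoder-law"   # hypothetical path / repo id for this checkpoint
LAMBDA = 0.5                      # interpolation weight; tune on held-out domain data

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
memdec = AutoModelForCausalLM.from_pretrained(MEMDEC_ID, torch_dtype=torch.bfloat16)

@torch.no_grad()
def interpolated_next_token_probs(text: str) -> torch.Tensor:
    # Both models share the Llama tokenizer, so their vocabularies align and
    # the two next-token distributions can be mixed directly at each step.
    inputs = tokenizer(text, return_tensors="pt")
    p_base = torch.softmax(base(**inputs).logits[:, -1, :].float(), dim=-1)
    p_mem = torch.softmax(memdec(**inputs).logits[:, -1, :].float(), dim=-1)
    return (1 - LAMBDA) * p_base + LAMBDA * p_mem

probs = interpolated_next_token_probs("The asylum claim was assessed under")
print(tokenizer.decode(probs.argmax(dim=-1)))
```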
- Paper: Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models
- GitHub: https://github.com/LUMIA-Group/MemoryDecoder
- Law Domain Dataset: AsyLex
- Test Split: MemoryDecoder-domain-data
| Model | Base Model | Base + MemDec |
|---|---|---|
| Llama3-8B | 5.96 | 4.46 |
| Llama3-70B | 4.90 | 4.07 |
| Llama3.1-8B | 5.88 | 4.42 |
| Llama3.1-70B | 4.89 | 4.06 |
| Llama3.2-1B | 8.23 | 5.11 |
| Llama3.2-3B | 6.83 | 4.76 |
Perplexity scores on the Law domain test set. Lower is better.
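For reference, a hedged sketch of how perplexity under the interpolated distribution can be computed, reusing `base`, `memdec`, `tokenizer`, and `LAMBDA` from the sketch above. This illustrates the metric reported in the table; it is not the exact evaluation script.

```python
import math
import torch

@torch.no_grad()
def interpolated_perplexity(text: str) -> float:
    # Perplexity of `text` under the (1 - LAMBDA) * base + LAMBDA * memdec mixture.
    inputs = tokenizer(text, return_tensors="pt")
    ids = inputs["input_ids"]
    p_base = torch.softmax(base(**inputs).logits.float(), dim=-1)
    p_mem = torch.softmax(memdec(**inputs).logits.float(), dim=-1)
    probs = (1 - LAMBDA) * p_base + LAMBDA * p_mem
    # Probability assigned to each actual next token (positions 1..T-1).
    tgt = probs[:, :-1, :].gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return math.exp(-tgt.log().mean().item())
```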
@article{cao2025memory,
title={Memory decoder: A pretrained, plug-and-play memory for large language models},
author={Cao, Jiaqi and Wang, Jiarui and Wei, Rubin and Guo, Qipeng and Chen, Kai and Zhou, Bowen and Lin, Zhouhan},
journal={arXiv preprint arXiv:2508.09874},
year={2025}
}
For questions and support: [email protected]
Base model: Qwen/Qwen2.5-0.5B