Model Description

This Memory Decoder model is trained on the Law domain and can be adapted to enhance any model in the Llama3, Llama3.1, and Llama3.2 families.

The Memory Decoder itself is initialized from a Qwen model (Qwen/Qwen2.5-0.5B), with its embedding layer adapted to the Llama tokenizer. This enables efficient cross-model-family knowledge transfer.
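As a hedged illustration of the plug-and-play idea, the sketch below loads this checkpoint alongside a base Llama model and interpolates their next-token distributions. The repo IDs, the interpolation weight `lam`, and the decoding step are illustrative assumptions, not the official interface; see the GitHub repository below for the reference implementation.

```python
# Minimal sketch (assumptions, not the official API): combine a base Llama
# model with the Memory Decoder by interpolating their next-token distributions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"              # any Llama3 / 3.1 / 3.2 model
memdec_id = "Clover-Hill/MemoryDecoder-Llama-law"   # this Memory Decoder checkpoint

# Both models use the Llama tokenizer (the Memory Decoder's embeddings were
# adapted to it), so they share input IDs and a vocabulary-aligned output.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
memdec = AutoModelForCausalLM.from_pretrained(memdec_id, torch_dtype=torch.bfloat16)

lam = 0.5  # interpolation weight (assumed value; tune per domain)

inputs = tokenizer("The asylum claim was denied because", return_tensors="pt")
with torch.no_grad():
    p_base = torch.softmax(base(**inputs).logits[:, -1], dim=-1)
    p_mem = torch.softmax(memdec(**inputs).logits[:, -1], dim=-1)

# Mix the two distributions and greedily pick the next token.
p_next = (1 - lam) * p_base + lam * p_mem
print(tokenizer.decode(p_next.argmax(dim=-1)))
```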

Paper: Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models

GitHub: https://github.com/LUMIA-Group/MemoryDecoder

Training & Evaluation Data

Law Domain Dataset: AsyLex

Test Split: MemoryDecoder-domain-data
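The snippet below shows one assumed way to pull the evaluation data with the `datasets` library; the repository ID and split name are inferred from the entries above, so adjust them to the actual Hub location.

```python
from datasets import load_dataset

# Assumed Hub repo ID / split for the Law-domain test data listed above.
law_test = load_dataset("Clover-Hill/MemoryDecoder-domain-data", split="test")
print(law_test)
```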

Performance Results

Llama3 Family

| Model | Base Model | Base + MemDec |
|---|---|---|
| Llama3-8B | 5.96 | 4.46 |
| Llama3-70B | 4.90 | 4.07 |

Llama3.1 Family

| Model | Base Model | Base + MemDec |
|---|---|---|
| Llama3.1-8B | 5.88 | 4.42 |
| Llama3.1-70B | 4.89 | 4.06 |

Llama3.2 Family

| Model | Base Model | Base + MemDec |
|---|---|---|
| Llama3.2-1B | 8.23 | 5.11 |
| Llama3.2-3B | 6.83 | 4.76 |

Perplexity on the Law domain test set; lower is better.
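Perplexity here follows the standard definition: the exponentiated average negative log-likelihood per token. Below is a minimal sketch for a single Hugging Face causal LM; the exact evaluation protocol and the probability interpolation behind the "Base + MemDec" column are in the GitHub repository.

```python
import math
import torch

def perplexity(model, input_ids: torch.Tensor) -> float:
    """Exponentiated mean token-level cross-entropy (negative log-likelihood)."""
    with torch.no_grad():
        out = model(input_ids, labels=input_ids)  # HF causal LMs return the mean CE loss
    return math.exp(out.loss.item())
```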

Citation

```bibtex
@article{cao2025memory,
  title={Memory decoder: A pretrained, plug-and-play memory for large language models},
  author={Cao, Jiaqi and Wang, Jiarui and Wei, Rubin and Guo, Qipeng and Chen, Kai and Zhou, Bowen and Lin, Zhouhan},
  journal={arXiv preprint arXiv:2508.09874},
  year={2025}
}
```

Contact

For questions and support: [email protected]
