# Recurrent-TinyLlama-3T-train-recurrence-16

Recurrent-TinyLlama-3T-train-recurrence-16 is part of the [Retrofitting Recurrence](https://hf.co/collections/tomg-group-umd/retrofitting-recurrence) set of models: depth-recurrent models trained by taking layers from pretrained feedforward language models ([paper](https://arxiv.org/abs/2511.07384)).

## Downloading and Using the Model
Load the model like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("smcleish/Recurrent-TinyLlama-3T-train-recurrence-16", torch_dtype=torch.float32, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("smcleish/Recurrent-TinyLlama-3T-train-recurrence-16")
```
### Modifying the Model's Depth at Test Time:
By providing the argument `num_steps`, the model will execute a forward pass with that amount of compute:
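
A minimal sketch of this, assuming the model and tokenizer loaded above (the prompt and tokenizer call are illustrative; the `num_steps=32` call follows the original example):

```python
# Encode an illustrative prompt.
input_ids = tokenizer("The capital of Westphalia is", return_tensors="pt").input_ids

# A single forward pass with 32 recurrent steps; larger values spend more test-time compute.
model(input_ids, num_steps=32)
```
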
The modelling file is based on [tomg-group-umd/huginn-0125](https://huggingface.co/tomg-group-umd/huginn-0125), and therefore inherits all of the cool recurrent features demonstrated in the README.md for that model.

## Training
We train using the code at https://github.com/mcleish7/retrofitting-recurrence, on AMD MI300A GPUs on [Tuolumne](https://hpc.llnl.gov/hardware/compute-platforms/tuolumne) at Lawrence Livermore National Laboratory.

## Data
Train and validation data are taken from non-overlapping subsets of raw text data; as such, this is _not_ an instruction model.

Please feel free to contact us with any questions, or open a discussion thread.

```
@article{mcleish2025teaching,
  title={Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence},
  author={Sean McLeish and Ang Li and John Kirchenbauer and Dayal Singh Kalra and Brian R. Bartoldson and Bhavya Kailkhura and Avi Schwarzschild and Jonas Geiping and Tom Goldstein and Micah Goldblum},
  journal={arXiv preprint arXiv:2511.07384},
  year={2025}
}
```