cai-qi committed on
Commit
95f6058
·
verified ·
0 Parent(s):

Super-squash branch 'main' using huggingface_hub

.gitattributes ADDED
@@ -0,0 +1,41 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/test_3.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/test.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/demo.png filter=lfs diff=lfs merge=lfs -text
+ assets/framework.png filter=lfs diff=lfs merge=lfs -text
+ assets/demo.jpg filter=lfs diff=lfs merge=lfs -text
+ assets/framework.jpg filter=lfs diff=lfs merge=lfs -text
8B-1024.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7c9c90bc929f561e96a46c9d3906a921a597515bbfc8ca671a72b05bc56bc53
+ size 33504697164
8B-512.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e67f8b98cc99f39b7d0b4b4287c56f8bab8195821b398574e87df2236533f178
+ size 33504700402
README.md ADDED
@@ -0,0 +1,152 @@
+ ---
+ license: mit
+ tags:
+ - image-editing
+ - HiDream.ai
+ language:
+ - en
+ pipeline_tag: any-to-any
+ base_model:
+ - FoundationVision/Infinity
+ ---
+ # VAREdit
+
+ ![VAREdit Demo](assets/demo.jpg)
+
+ [VAREdit](https://github.com/HiDream-ai/VAREdit) is an advanced image editing model built on the [Infinity](https://huggingface.co/FoundationVision/infinity) models, designed for high-quality instruction-based image editing.
+
+ Try our online demos: [🤗VAREdit-8B-1024](https://huggingface.co/spaces/HiDream-ai/VAREdit-8B-1024) and [🤗VAREdit-8B-512](https://huggingface.co/spaces/HiDream-ai/VAREdit-8B-512).
+
+ ## 🌟 Key Features
+
+ - **Strong Instruction Following**: Follows editing instructions more accurately thanks to the model's autoregressive nature.
+ - **Efficient Inference**: Optimized for fast generation, with the 8B model editing a 512×512 image in under one second.
+ - **Flexible Resolution**: Supports 512×512 and 1024×1024 image resolutions.
+
+ ![VAREdit Framework](assets/framework.jpg)
+
+ ## 📊 Model Variants
+
+ | Model Variant | Resolution | HuggingFace Model | Time (H800) | VRAM (GB) |
+ |------------------|--------------|----------------------------------------------------------------------------------|----------|-----------|
+ | VAREdit-8B-512 | 512×512 | [VAREdit-8B-512](https://huggingface.co/HiDream-ai/VAREdit) | ~0.7s | 50.41 |
+ | VAREdit-8B-1024 | 1024×1024 | [VAREdit-8B-1024](https://huggingface.co/HiDream-ai/VAREdit) | ~1.99s | 50.41 |
+
+ ## 🚀 Quick Start
+
+ ### Prerequisites
+
+ Before starting, ensure you have:
+ - Python 3.8+
+ - A CUDA-compatible GPU with sufficient VRAM (8 GB+ for the 2B model, 24 GB+ for the 8B model)
+ - Required dependencies installed
+
+ ### Installation
+
+ 1. **Clone the repository**
+ ```bash
+ git clone https://github.com/HiDream-ai/VAREdit.git
+ cd VAREdit
+ ```
+
+ 2. **Install dependencies**
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ 3. **Download model checkpoints**
+
+ Download the VAREdit model checkpoints:
+ ```bash
+ # Download from HuggingFace
+ git lfs install
+ git clone https://huggingface.co/HiDream-ai/VAREdit
+ ```
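+
+ Alternatively, if you only need a single checkpoint rather than a full clone, the `huggingface_hub` client can fetch one file directly (a minimal sketch, not part of the original instructions; file names follow the repository layout above):
+
+ ```python
+ # Sketch: download one checkpoint from the VAREdit repository
+ from huggingface_hub import hf_hub_download
+
+ # Fetches 8B-1024.pth into the local HF cache and returns its path
+ ckpt_path = hf_hub_download(repo_id="HiDream-ai/VAREdit", filename="8B-1024.pth")
+ print(ckpt_path)
+ ```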
+
+ ### Basic Usage
+
+ ```python
+ from infer import load_model, generate_image
+
+ model_components = load_model(
+     pretrain_root="HiDream-ai/VAREdit",
+     model_path="HiDream-ai/VAREdit/8B-1024.pth",
+     model_size="8B",
+     image_size=1024
+ )
+
+ # Generate edited image
+ edited_image = generate_image(
+     model_components,
+     src_img_path="assets/test.jpg",
+     instruction="Add glasses to this girl and change hair color to red",
+     cfg=3.0,  # Classifier-free guidance scale
+     tau=0.1,  # Temperature parameter
+     seed=42   # Optional random seed
+ )
+ ```
+
+ ## 📝 Detailed Configuration
+
+ ### Model Sampling Parameters
+
+ | Parameter | Description | Default |
+ |-----------|-------------|---------|
+ | `cfg` | Classifier-free guidance scale | 3.0 |
+ | `tau` | Temperature for sampling | 0.1 |
+ | `seed` | Random seed for reproducibility | -1 (random) |
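+
+ As a quick illustration, one might sweep the guidance scale while holding `tau` and `seed` fixed (a sketch reusing the `generate_image` helper from Basic Usage; that it returns a PIL-style image with `.save()` is an assumption):
+
+ ```python
+ # Hypothetical cfg sweep; only the guidance scale varies between runs
+ for cfg in (2.0, 3.0, 4.0):
+     edited = generate_image(
+         model_components,
+         src_img_path="assets/test.jpg",
+         instruction="Add glasses to this girl and change hair color to red",
+         cfg=cfg,   # higher values follow the instruction more strictly
+         tau=0.1,
+         seed=42    # fixed seed so runs differ only in cfg
+     )
+     edited.save(f"edited_cfg_{cfg}.jpg")  # assumes a PIL image is returned
+ ```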
+
+ ## 📂 Project Structure
+
+ ```
+ VAREdit/
+ ├── infer.py            # Main inference script
+ ├── infinity/           # Core model implementations
+ │   ├── models/         # Model architectures
+ │   ├── dataset/        # Data processing utilities
+ │   └── utils/          # Helper functions
+ ├── tools/              # Additional tools and scripts
+ │   └── run_infinity.py # Model execution utilities
+ ├── assets/             # Demo images and resources
+ └── README.md           # This file
+ ```
+
+ ## 📊 Performance Benchmarks
+
+ | **Method** | **Size** | **EMU-Edit Bal.** | **PIE-Bench Bal.** | **Time (A800)** |
+ |:---|:---:|:---:|:---:|:---:|
+ | InstructPix2Pix | 1.1B | 2.923 | 4.034 | 3.5s |
+ | UltraEdit | 7.7B | 4.541 | 5.580 | 2.6s |
+ | OmniGen | 3.8B | 4.674 | 3.492 | 16.5s |
+ | AnySD | 2.9B | 3.129 | 3.326 | 3.4s |
+ | EditAR | 0.8B | 3.305 | 4.707 | 45.5s |
+ | ACE++ | 16.9B | 2.076 | 2.574 | 5.7s |
+ | ICEdit | 17.0B | 4.785 | 4.933 | 8.4s |
+ | **VAREdit** (256px) | 2.2B | 5.565 | 6.684 | 0.5s |
+ | **VAREdit** (512px) | 2.2B | 5.662 | 6.996 | 0.7s |
+ | **VAREdit** (512px) | 8.4B | 7.792 | 8.105 | 1.2s |
+ | **VAREdit** (1024px) | 8.4B | 7.379 | 7.688 | 3.9s |
+
+ **Note**: The released 8B models were trained longer and on more data, so their performance is better than the numbers reported in the paper.
+
+ ## 📄 License
+
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+
+ ## 📚 Citation
+
+ If you use VAREdit in your research, please cite:
+
+ ```bibtex
+ @article{varedit2025,
+   title={Visual Autoregressive Modeling for Instruction-Guided Image Editing},
+   author={Mao, Qingyang and Cai, Qi and Li, Yehao and Pan, Yingwei and Cheng, Mingyue and Yao, Ting and Liu, Qi and Mei, Tao},
+   journal={arXiv preprint},
+   year={2025}
+ }
+ ```
+
+ ## 🙏 Acknowledgments
+
+ - Built on the [Infinity](https://huggingface.co/FoundationVision/infinity) models
+
+ **Note**: This project is under active development. Features and code may change.
assets/demo.jpg ADDED

Git LFS Details

  • SHA256: 47f346fb94792a3d8fa386d7f30819f0653607bf5a1dd5a82db19e1a9313a863
  • Pointer size: 131 Bytes
  • Size of remote file: 956 kB
assets/framework.jpg ADDED

Git LFS Details

  • SHA256: d2c2806cdd2b1ed120375fd32d696fb6fb21ace29cd919b23609f5cd4d37d3d8
  • Pointer size: 131 Bytes
  • Size of remote file: 267 kB
assets/keep ADDED
File without changes
assets/test.jpg ADDED

Git LFS Details

  • SHA256: 313485a23fe9574c8968717398520d2b0c061aee460b317b93c5cb9100395cdd
  • Pointer size: 131 Bytes
  • Size of remote file: 118 kB
assets/test_1.jpg ADDED
assets/test_3.jpg ADDED

Git LFS Details

  • SHA256: cf117f67ef5a8056eabc4adbc3174a043a0811925f39ba2d1cd7c081e8f6b1fc
  • Pointer size: 131 Bytes
  • Size of remote file: 332 kB
assets/test_4.jpg ADDED
flan-t5-xl/.gitattributes ADDED
@@ -0,0 +1,33 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
flan-t5-xl/README.md ADDED
@@ -0,0 +1,276 @@
+ ---
+ language:
+ - en
+ - fr
+ - ro
+ - de
+ - multilingual
+
+ widget:
+ - text: "Translate to German: My name is Arthur"
+   example_title: "Translation"
+ - text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
+   example_title: "Question Answering"
+ - text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
+   example_title: "Logical reasoning"
+ - text: "Please answer the following question. What is the boiling point of Nitrogen?"
+   example_title: "Scientific knowledge"
+ - text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
+   example_title: "Yes/no question"
+ - text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
+   example_title: "Reasoning task"
+ - text: "Q: ( False or not False or False ) is? A: Let's think step by step"
+   example_title: "Boolean Expressions"
+ - text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
+   example_title: "Math reasoning"
+ - text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
+   example_title: "Premise and hypothesis"
+
+ tags:
+ - text2text-generation
+
+ datasets:
+ - svakulenk0/qrecc
+ - taskmaster2
+ - djaym7/wiki_dialog
+ - deepmind/code_contests
+ - lambada
+ - gsm8k
+ - aqua_rat
+ - esnli
+ - quasc
+ - qed
+
+ license: apache-2.0
+ ---
+
+ # Model Card for FLAN-T5 XL
+
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
+ alt="drawing" width="600"/>
+
+ # Table of Contents
+
+ 0. [TL;DR](#TL;DR)
+ 1. [Model Details](#model-details)
+ 2. [Usage](#usage)
+ 3. [Uses](#uses)
+ 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ 5. [Training Details](#training-details)
+ 6. [Evaluation](#evaluation)
+ 7. [Environmental Impact](#environmental-impact)
+ 8. [Citation](#citation)
+
+ # TL;DR
+
+ If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering more languages as well.
+ As mentioned in the first few lines of the abstract:
+ > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
+
+ **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
+
+ # Model Details
+
+ ## Model Description
+
+ - **Model type:** Language model
+ - **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
+ - **License:** Apache 2.0
+ - **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
+ - **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
+ - **Resources for more information:**
+   - [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
+   - [GitHub Repo](https://github.com/google-research/t5x)
+   - [Hugging Face FLAN-T5 Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/t5)
+
+ # Usage
+
+ Below are some example scripts showing how to use the model in `transformers`:
+
+ ## Using the PyTorch model
+
+ ### Running the model on a CPU
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
+ model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl")
+
+ input_text = "translate English to German: How old are you?"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ ### Running the model on a GPU
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install accelerate
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
+ model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto")
+
+ input_text = "translate English to German: How old are you?"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ ### Running the model on a GPU using different precisions
+
+ #### FP16
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install accelerate
+ import torch
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
+ model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
+
+ input_text = "translate English to German: How old are you?"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ #### INT8
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
+ model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", load_in_8bit=True)
+
+ input_text = "translate English to German: How old are you?"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ </details>
+
+ # Uses
+
+ ## Direct Use and Downstream Use
+
+ The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
+
+ > The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
+
+ See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
+
+ ## Out-of-Scope Use
+
+ More information needed.
+
+ # Bias, Risks, and Limitations
+
+ The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
+
+ > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+
+ ## Ethical considerations and risks
+
+ > Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
+
+ ## Known Limitations
+
+ > Flan-T5 has not been tested in real world applications.
+
+ ## Sensitive Use:
+
+ > Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
+
+ # Training Details
+
+ ## Training Data
+
+ The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, figure 2):
+
+ ![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png)
+
+ ## Training Procedure
+
+ According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
+
+ > These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
+
+ The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ The authors evaluated the model on various tasks covering several languages (1,836 tasks in total). See the table below for some quantitative evaluation:
+ ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png)
+ For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
+
+ ## Results
+
+ For full results for FLAN-T5-XL, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
+ - **Hours used:** More information needed
+ - **Cloud Provider:** GCP
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @misc{https://doi.org/10.48550/arxiv.2210.11416,
+   doi = {10.48550/ARXIV.2210.11416},
+   url = {https://arxiv.org/abs/2210.11416},
+   author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
+   keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+   title = {Scaling Instruction-Finetuned Language Models},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
flan-t5-xl/config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "architectures": [
+     "T5ForConditionalGeneration"
+   ],
+   "d_ff": 5120,
+   "d_kv": 64,
+   "d_model": 2048,
+   "decoder_start_token_id": 0,
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 24,
+   "num_heads": 32,
+   "num_layers": 24,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "task_specific_params": {
+     "summarization": {
+       "early_stopping": true,
+       "length_penalty": 2.0,
+       "max_length": 200,
+       "min_length": 30,
+       "no_repeat_ngram_size": 3,
+       "num_beams": 4,
+       "prefix": "summarize: "
+     },
+     "translation_en_to_de": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to German: "
+     },
+     "translation_en_to_fr": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to French: "
+     },
+     "translation_en_to_ro": {
+       "early_stopping": true,
+       "max_length": 300,
+       "num_beams": 4,
+       "prefix": "translate English to Romanian: "
+     }
+   },
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.24.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32128
+ }
flan-t5-xl/generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.27.0.dev0"
+ }
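
Since the commit bundles flan-t5-xl as a local subfolder (presumably as the text encoder inherited from Infinity), the two config files above can be exercised by loading directly from that subfolder. A minimal sketch, assuming a `transformers` install and that the tokenizer files are also present in the subfolder:

```python
# Sketch: load the bundled flan-t5-xl straight from this repository's subfolder
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained(
    "HiDream-ai/VAREdit",   # this repository
    subfolder="flan-t5-xl"  # the folder these config/weight files live in
)
```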
flan-t5-xl/model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99196ddfbe886e8ef860f52de979df64890edfc792c3d94ce0502991f347dd18
+ size 9449619912
flan-t5-xl/model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0c677ddeb21009b6efd97146f37fc3a0396707fb5e63ade7aff64884dce9806
+ size 1949477672
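
The index file below maps every tensor name to the shard that stores it. A small sketch of how one might inspect it from a local clone (the file path assumes the repository layout above):

```python
# Sketch: look up which safetensors shard holds a given tensor
import json

with open("flan-t5-xl/model.safetensors.index.json") as f:
    index = json.load(f)

print(index["metadata"]["total_size"])                     # total bytes across both shards
print(index["weight_map"]["decoder.embed_tokens.weight"])  # -> "model-00001-of-00002.safetensors"
```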
flan-t5-xl/model.safetensors.index.json ADDED
@@ -0,0 +1,567 @@
+ {
+   "metadata": {
+     "total_size": 11925413888
+   },
+   "weight_map": {
+     "decoder.block.0.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.0.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.1.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.10.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.11.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.12.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.13.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.14.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.15.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.16.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.17.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.17.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.17.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.17.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.18.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.19.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.2.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.2.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.20.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.20.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.21.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.22.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.0.SelfAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.0.SelfAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.0.SelfAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.0.SelfAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.0.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.1.EncDecAttention.k.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.1.EncDecAttention.o.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.1.EncDecAttention.q.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.1.EncDecAttention.v.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.1.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.2.DenseReluDense.wi_0.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.2.DenseReluDense.wi_1.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.2.DenseReluDense.wo.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.23.layer.2.layer_norm.weight": "model-00002-of-00002.safetensors",
+     "decoder.block.3.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.3.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.4.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.5.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.6.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.7.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.8.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.1.EncDecAttention.k.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.1.EncDecAttention.o.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.1.EncDecAttention.q.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.1.EncDecAttention.v.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.2.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.2.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.2.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "decoder.block.9.layer.2.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "decoder.embed_tokens.weight": "model-00001-of-00002.safetensors",
+     "decoder.final_layer_norm.weight": "model-00002-of-00002.safetensors",
+     "encoder.block.0.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.0.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.1.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.10.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
+     "encoder.block.10.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
366
+ "encoder.block.10.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
367
+ "encoder.block.10.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
368
+ "encoder.block.10.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
369
+ "encoder.block.10.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
370
+ "encoder.block.10.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
371
+ "encoder.block.10.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
372
+ "encoder.block.10.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
373
+ "encoder.block.11.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
374
+ "encoder.block.11.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
375
+ "encoder.block.11.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
376
+ "encoder.block.11.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
377
+ "encoder.block.11.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
378
+ "encoder.block.11.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
379
+ "encoder.block.11.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
380
+ "encoder.block.11.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
381
+ "encoder.block.11.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
382
+ "encoder.block.12.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
383
+ "encoder.block.12.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
384
+ "encoder.block.12.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
385
+ "encoder.block.12.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
386
+ "encoder.block.12.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
387
+ "encoder.block.12.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
388
+ "encoder.block.12.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
389
+ "encoder.block.12.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
390
+ "encoder.block.12.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
391
+ "encoder.block.13.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
392
+ "encoder.block.13.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
393
+ "encoder.block.13.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
394
+ "encoder.block.13.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
395
+ "encoder.block.13.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
396
+ "encoder.block.13.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
397
+ "encoder.block.13.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
398
+ "encoder.block.13.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
399
+ "encoder.block.13.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
400
+ "encoder.block.14.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
401
+ "encoder.block.14.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
402
+ "encoder.block.14.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
403
+ "encoder.block.14.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
404
+ "encoder.block.14.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
405
+ "encoder.block.14.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
406
+ "encoder.block.14.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
407
+ "encoder.block.14.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
408
+ "encoder.block.14.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
409
+ "encoder.block.15.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
410
+ "encoder.block.15.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
411
+ "encoder.block.15.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
412
+ "encoder.block.15.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
413
+ "encoder.block.15.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
414
+ "encoder.block.15.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
415
+ "encoder.block.15.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
416
+ "encoder.block.15.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
417
+ "encoder.block.15.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
418
+ "encoder.block.16.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
419
+ "encoder.block.16.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
420
+ "encoder.block.16.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
421
+ "encoder.block.16.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
422
+ "encoder.block.16.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
423
+ "encoder.block.16.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
424
+ "encoder.block.16.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
425
+ "encoder.block.16.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
426
+ "encoder.block.16.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
427
+ "encoder.block.17.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
428
+ "encoder.block.17.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
429
+ "encoder.block.17.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
430
+ "encoder.block.17.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
431
+ "encoder.block.17.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
432
+ "encoder.block.17.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
433
+ "encoder.block.17.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
434
+ "encoder.block.17.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
435
+ "encoder.block.17.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
436
+ "encoder.block.18.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
437
+ "encoder.block.18.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
438
+ "encoder.block.18.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
439
+ "encoder.block.18.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
440
+ "encoder.block.18.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
441
+ "encoder.block.18.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
442
+ "encoder.block.18.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
443
+ "encoder.block.18.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
444
+ "encoder.block.18.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
445
+ "encoder.block.19.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
446
+ "encoder.block.19.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
447
+ "encoder.block.19.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
448
+ "encoder.block.19.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
449
+ "encoder.block.19.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
450
+ "encoder.block.19.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
451
+ "encoder.block.19.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
452
+ "encoder.block.19.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
453
+ "encoder.block.19.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
454
+ "encoder.block.2.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
455
+ "encoder.block.2.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
456
+ "encoder.block.2.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
457
+ "encoder.block.2.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
458
+ "encoder.block.2.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
459
+ "encoder.block.2.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
460
+ "encoder.block.2.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
461
+ "encoder.block.2.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
462
+ "encoder.block.2.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
463
+ "encoder.block.20.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
464
+ "encoder.block.20.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
465
+ "encoder.block.20.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
466
+ "encoder.block.20.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
467
+ "encoder.block.20.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
468
+ "encoder.block.20.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
469
+ "encoder.block.20.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
470
+ "encoder.block.20.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
471
+ "encoder.block.20.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
472
+ "encoder.block.21.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
473
+ "encoder.block.21.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
474
+ "encoder.block.21.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
475
+ "encoder.block.21.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
476
+ "encoder.block.21.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
477
+ "encoder.block.21.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
478
+ "encoder.block.21.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
479
+ "encoder.block.21.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
480
+ "encoder.block.21.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
481
+ "encoder.block.22.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
482
+ "encoder.block.22.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
483
+ "encoder.block.22.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
484
+ "encoder.block.22.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
485
+ "encoder.block.22.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
486
+ "encoder.block.22.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
487
+ "encoder.block.22.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
488
+ "encoder.block.22.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
489
+ "encoder.block.22.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
490
+ "encoder.block.23.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
491
+ "encoder.block.23.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
492
+ "encoder.block.23.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
493
+ "encoder.block.23.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
494
+ "encoder.block.23.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
495
+ "encoder.block.23.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
496
+ "encoder.block.23.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
497
+ "encoder.block.23.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
498
+ "encoder.block.23.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
499
+ "encoder.block.3.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
500
+ "encoder.block.3.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
501
+ "encoder.block.3.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
502
+ "encoder.block.3.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
503
+ "encoder.block.3.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
504
+ "encoder.block.3.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
505
+ "encoder.block.3.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
506
+ "encoder.block.3.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
507
+ "encoder.block.3.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
508
+ "encoder.block.4.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
509
+ "encoder.block.4.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
510
+ "encoder.block.4.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
511
+ "encoder.block.4.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
512
+ "encoder.block.4.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
513
+ "encoder.block.4.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
514
+ "encoder.block.4.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
515
+ "encoder.block.4.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
516
+ "encoder.block.4.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
517
+ "encoder.block.5.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
518
+ "encoder.block.5.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
519
+ "encoder.block.5.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
520
+ "encoder.block.5.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
521
+ "encoder.block.5.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
522
+ "encoder.block.5.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
523
+ "encoder.block.5.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
524
+ "encoder.block.5.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
525
+ "encoder.block.5.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
526
+ "encoder.block.6.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
527
+ "encoder.block.6.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
528
+ "encoder.block.6.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
529
+ "encoder.block.6.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
530
+ "encoder.block.6.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
531
+ "encoder.block.6.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
532
+ "encoder.block.6.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
533
+ "encoder.block.6.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
534
+ "encoder.block.6.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
535
+ "encoder.block.7.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
536
+ "encoder.block.7.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
537
+ "encoder.block.7.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
538
+ "encoder.block.7.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
539
+ "encoder.block.7.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
540
+ "encoder.block.7.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
541
+ "encoder.block.7.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
542
+ "encoder.block.7.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
543
+ "encoder.block.7.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
544
+ "encoder.block.8.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
545
+ "encoder.block.8.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
546
+ "encoder.block.8.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
547
+ "encoder.block.8.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
548
+ "encoder.block.8.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
549
+ "encoder.block.8.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
550
+ "encoder.block.8.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
551
+ "encoder.block.8.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
552
+ "encoder.block.8.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
553
+ "encoder.block.9.layer.0.SelfAttention.k.weight": "model-00001-of-00002.safetensors",
554
+ "encoder.block.9.layer.0.SelfAttention.o.weight": "model-00001-of-00002.safetensors",
555
+ "encoder.block.9.layer.0.SelfAttention.q.weight": "model-00001-of-00002.safetensors",
556
+ "encoder.block.9.layer.0.SelfAttention.v.weight": "model-00001-of-00002.safetensors",
557
+ "encoder.block.9.layer.0.layer_norm.weight": "model-00001-of-00002.safetensors",
558
+ "encoder.block.9.layer.1.DenseReluDense.wi_0.weight": "model-00001-of-00002.safetensors",
559
+ "encoder.block.9.layer.1.DenseReluDense.wi_1.weight": "model-00001-of-00002.safetensors",
560
+ "encoder.block.9.layer.1.DenseReluDense.wo.weight": "model-00001-of-00002.safetensors",
561
+ "encoder.block.9.layer.1.layer_norm.weight": "model-00001-of-00002.safetensors",
562
+ "encoder.embed_tokens.weight": "model-00001-of-00002.safetensors",
563
+ "encoder.final_layer_norm.weight": "model-00001-of-00002.safetensors",
564
+ "lm_head.weight": "model-00002-of-00002.safetensors",
565
+ "shared.weight": "model-00001-of-00002.safetensors"
566
+ }
567
+ }
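For reference, the `weight_map` in `model.safetensors.index.json` above maps each parameter name to the shard file that stores it; almost everything lives in `model-00001-of-00002.safetensors`, with only `decoder.final_layer_norm.weight` and `lm_head.weight` in the second shard. A minimal sketch of resolving a tensor through the index (assuming the shards sit next to the index file and `safetensors` is installed):

```python
import json
from safetensors.torch import load_file

# The index maps each parameter name to the shard file that stores it.
with open("flan-t5-xl/model.safetensors.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]

# Look up which shard holds a given tensor, then load only that shard.
name = "decoder.block.6.layer.0.SelfAttention.q.weight"
shard = weight_map[name]  # e.g. "model-00001-of-00002.safetensors"
tensor = load_file(f"flan-t5-xl/{shard}")[name]
print(name, tuple(tensor.shape))
```

In practice `transformers` does this shard resolution automatically; the sketch only illustrates what the index encodes.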
flan-t5-xl/special_tokens_map.json ADDED
@@ -0,0 +1,107 @@
+ {
+ "additional_special_tokens": [
+ "<extra_id_0>",
+ "<extra_id_1>",
+ "<extra_id_2>",
+ "<extra_id_3>",
+ "<extra_id_4>",
+ "<extra_id_5>",
+ "<extra_id_6>",
+ "<extra_id_7>",
+ "<extra_id_8>",
+ "<extra_id_9>",
+ "<extra_id_10>",
+ "<extra_id_11>",
+ "<extra_id_12>",
+ "<extra_id_13>",
+ "<extra_id_14>",
+ "<extra_id_15>",
+ "<extra_id_16>",
+ "<extra_id_17>",
+ "<extra_id_18>",
+ "<extra_id_19>",
+ "<extra_id_20>",
+ "<extra_id_21>",
+ "<extra_id_22>",
+ "<extra_id_23>",
+ "<extra_id_24>",
+ "<extra_id_25>",
+ "<extra_id_26>",
+ "<extra_id_27>",
+ "<extra_id_28>",
+ "<extra_id_29>",
+ "<extra_id_30>",
+ "<extra_id_31>",
+ "<extra_id_32>",
+ "<extra_id_33>",
+ "<extra_id_34>",
+ "<extra_id_35>",
+ "<extra_id_36>",
+ "<extra_id_37>",
+ "<extra_id_38>",
+ "<extra_id_39>",
+ "<extra_id_40>",
+ "<extra_id_41>",
+ "<extra_id_42>",
+ "<extra_id_43>",
+ "<extra_id_44>",
+ "<extra_id_45>",
+ "<extra_id_46>",
+ "<extra_id_47>",
+ "<extra_id_48>",
+ "<extra_id_49>",
+ "<extra_id_50>",
+ "<extra_id_51>",
+ "<extra_id_52>",
+ "<extra_id_53>",
+ "<extra_id_54>",
+ "<extra_id_55>",
+ "<extra_id_56>",
+ "<extra_id_57>",
+ "<extra_id_58>",
+ "<extra_id_59>",
+ "<extra_id_60>",
+ "<extra_id_61>",
+ "<extra_id_62>",
+ "<extra_id_63>",
+ "<extra_id_64>",
+ "<extra_id_65>",
+ "<extra_id_66>",
+ "<extra_id_67>",
+ "<extra_id_68>",
+ "<extra_id_69>",
+ "<extra_id_70>",
+ "<extra_id_71>",
+ "<extra_id_72>",
+ "<extra_id_73>",
+ "<extra_id_74>",
+ "<extra_id_75>",
+ "<extra_id_76>",
+ "<extra_id_77>",
+ "<extra_id_78>",
+ "<extra_id_79>",
+ "<extra_id_80>",
+ "<extra_id_81>",
+ "<extra_id_82>",
+ "<extra_id_83>",
+ "<extra_id_84>",
+ "<extra_id_85>",
+ "<extra_id_86>",
+ "<extra_id_87>",
+ "<extra_id_88>",
+ "<extra_id_89>",
+ "<extra_id_90>",
+ "<extra_id_91>",
+ "<extra_id_92>",
+ "<extra_id_93>",
+ "<extra_id_94>",
+ "<extra_id_95>",
+ "<extra_id_96>",
+ "<extra_id_97>",
+ "<extra_id_98>",
+ "<extra_id_99>"
+ ],
+ "eos_token": "</s>",
+ "pad_token": "<pad>",
+ "unk_token": "<unk>"
+ }
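This file registers T5's 100 `<extra_id_*>` sentinel tokens alongside the standard `</s>`, `<pad>`, and `<unk>` tokens. A quick sanity check of the file as committed (paths as in this repo):

```python
import json

with open("flan-t5-xl/special_tokens_map.json") as f:
    specials = json.load(f)

# 100 sentinel tokens, <extra_id_0> ... <extra_id_99>
assert len(specials["additional_special_tokens"]) == 100
assert specials["eos_token"] == "</s>"
assert specials["pad_token"] == "<pad>"
assert specials["unk_token"] == "<unk>"
```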
flan-t5-xl/spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
+ size 791656
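`spiece.model` is committed as a Git LFS pointer: the three lines above are the pointer format itself (spec version, SHA-256 of the blob, byte size), not the SentencePiece model. Before `git lfs pull` materializes the real file, a pointer like this can be read with a small sketch such as:

```python
def parse_lfs_pointer(path: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = parse_lfs_pointer("flan-t5-xl/spiece.model")
print(ptr["oid"], ptr["size"])  # sha256:d60a... 791656
```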
flan-t5-xl/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
flan-t5-xl/tokenizer_config.json ADDED
@@ -0,0 +1,113 @@
+ {
+ "additional_special_tokens": [
+ "<extra_id_0>",
+ "<extra_id_1>",
+ "<extra_id_2>",
+ "<extra_id_3>",
+ "<extra_id_4>",
+ "<extra_id_5>",
+ "<extra_id_6>",
+ "<extra_id_7>",
+ "<extra_id_8>",
+ "<extra_id_9>",
+ "<extra_id_10>",
+ "<extra_id_11>",
+ "<extra_id_12>",
+ "<extra_id_13>",
+ "<extra_id_14>",
+ "<extra_id_15>",
+ "<extra_id_16>",
+ "<extra_id_17>",
+ "<extra_id_18>",
+ "<extra_id_19>",
+ "<extra_id_20>",
+ "<extra_id_21>",
+ "<extra_id_22>",
+ "<extra_id_23>",
+ "<extra_id_24>",
+ "<extra_id_25>",
+ "<extra_id_26>",
+ "<extra_id_27>",
+ "<extra_id_28>",
+ "<extra_id_29>",
+ "<extra_id_30>",
+ "<extra_id_31>",
+ "<extra_id_32>",
+ "<extra_id_33>",
+ "<extra_id_34>",
+ "<extra_id_35>",
+ "<extra_id_36>",
+ "<extra_id_37>",
+ "<extra_id_38>",
+ "<extra_id_39>",
+ "<extra_id_40>",
+ "<extra_id_41>",
+ "<extra_id_42>",
+ "<extra_id_43>",
+ "<extra_id_44>",
+ "<extra_id_45>",
+ "<extra_id_46>",
+ "<extra_id_47>",
+ "<extra_id_48>",
+ "<extra_id_49>",
+ "<extra_id_50>",
+ "<extra_id_51>",
+ "<extra_id_52>",
+ "<extra_id_53>",
+ "<extra_id_54>",
+ "<extra_id_55>",
+ "<extra_id_56>",
+ "<extra_id_57>",
+ "<extra_id_58>",
+ "<extra_id_59>",
+ "<extra_id_60>",
+ "<extra_id_61>",
+ "<extra_id_62>",
+ "<extra_id_63>",
+ "<extra_id_64>",
+ "<extra_id_65>",
+ "<extra_id_66>",
+ "<extra_id_67>",
+ "<extra_id_68>",
+ "<extra_id_69>",
+ "<extra_id_70>",
+ "<extra_id_71>",
+ "<extra_id_72>",
+ "<extra_id_73>",
+ "<extra_id_74>",
+ "<extra_id_75>",
+ "<extra_id_76>",
+ "<extra_id_77>",
+ "<extra_id_78>",
+ "<extra_id_79>",
+ "<extra_id_80>",
+ "<extra_id_81>",
+ "<extra_id_82>",
+ "<extra_id_83>",
+ "<extra_id_84>",
+ "<extra_id_85>",
+ "<extra_id_86>",
+ "<extra_id_87>",
+ "<extra_id_88>",
+ "<extra_id_89>",
+ "<extra_id_90>",
+ "<extra_id_91>",
+ "<extra_id_92>",
+ "<extra_id_93>",
+ "<extra_id_94>",
+ "<extra_id_95>",
+ "<extra_id_96>",
+ "<extra_id_97>",
+ "<extra_id_98>",
+ "<extra_id_99>"
+ ],
+ "eos_token": "</s>",
+ "extra_ids": 100,
+ "model_max_length": 512,
+ "name_or_path": "google/t5-v1_1-small",
+ "pad_token": "<pad>",
+ "sp_model_kwargs": {},
+ "special_tokens_map_file": "/home/arthur_huggingface_co/.cache/huggingface/hub/models--google--t5-v1_1-small/snapshots/fb7e6cba609f7bab11c614294bc04f82f613c7b1/special_tokens_map.json",
+ "tokenizer_class": "T5Tokenizer",
+ "unk_token": "<unk>"
+ }
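`tokenizer_config.json` declares `T5Tokenizer` with `model_max_length: 512` and the same 100 sentinels (`extra_ids: 100`). A minimal loading sketch, assuming `transformers` and `sentencepiece` are installed and the LFS files in `flan-t5-xl/` have been pulled:

```python
from transformers import AutoTokenizer

# AutoTokenizer picks up T5Tokenizer from the tokenizer_class field.
tokenizer = AutoTokenizer.from_pretrained("flan-t5-xl")

print(tokenizer.model_max_length)                  # 512, per tokenizer_config.json
print(tokenizer.convert_tokens_to_ids("<extra_id_0>"))  # sentinel tokens sit at the top of the vocab
```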
infinity_vae_d56_f8_14_patchify.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4c695350125593dde7ec7120e6dff7039a9adea01516f6e14758eb94e3ec6c2
+ size 1215341746
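`infinity_vae_d56_f8_14_patchify.pth` is likewise an LFS pointer to a ~1.2 GB PyTorch checkpoint (the Infinity VAE used by VAREdit). The VAE class itself lives in the VAREdit/Infinity codebase, so the following is only a hedged sketch for inspecting the downloaded checkpoint, not the project's loading code:

```python
import torch

# Inspect on CPU; the checkpoint layout is defined by the Infinity codebase.
state = torch.load("infinity_vae_d56_f8_14_patchify.pth", map_location="cpu")

# Checkpoints are often either a raw state dict or a dict wrapping one.
sd = state.get("state_dict", state) if isinstance(state, dict) else state
for name in list(sd)[:5]:
    print(name)
```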