ridger committed on
Commit cfe6f63 · verified · 1 Parent(s): 18f36b9

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ assets/logo.png filter=lfs diff=lfs merge=lfs -text
37
+ assets/ouro_thinking.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,121 @@
1
+ ---
2
+ license: apache-2.0
3
+ pipeline_tag: text-generation
4
+ library_name: transformers
5
+ tags:
6
+ - looped-language-model
7
+ - reasoning
8
+ - recurrent-depth
9
+ - thinking
10
+ - chain-of-thought
11
+ ---
12
+
13
+ # Ouro-2.6B-Thinking
14
+
15
+ ![Ouro Logo](assets/logo.png)
16
+
17
+ ## Model Description
18
+
19
+
20
+ **⚠️ IMPORTANT: This model is intended for research purposes only. It is provided as-is without warranties for production use.**
21
+
22
+
23
+ **Ouro-2.6B-Thinking** is a reasoning-specialized variant of the Ouro-2.6B base model, enhanced through supervised fine-tuning on high-quality reasoning data. Please use `transformers==4.54.1` for compatibility.
24
+
25
+ ![Thinking Model Performance](assets/ouro_thinking.png)
26
+
27
+ ## Key Features
28
+
29
+ - **Advanced Reasoning**: Specifically optimized for mathematical and scientific reasoning tasks
30
+ - **Compact Size**: Competitive with 4B models despite having only 2.6B parameters
31
+ - **Faithful Chain-of-Thought**: Iterative latent updates yield causally faithful reasoning traces
32
+ - **Cross-Step Consistency**: Intermediate recurrent outputs can serve as reliable proxies for the final answer (see the sketch after this list)
33
+ - **Explicit Thinking Process**: Trained to generate detailed reasoning steps
34
+
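+ The cross-step consistency property can be probed directly: the custom `OuroForCausalLM.forward` shipped in this repo's `modeling_ouro.py` accepts an `exit_at_step` argument that computes logits from an intermediate recurrent step instead of the last one. A minimal sketch (assuming `model` and `tokenizer` are loaded as in the Quick Start section below; whether early and final answers agree on a given prompt is an empirical question):
+
+ ```python
+ import torch
+
+ prompt_ids = tokenizer("2 + 2 =", return_tensors="pt").input_ids.to(model.device)
+
+ with torch.no_grad():
+     final_logits = model(prompt_ids).logits[:, -1]                  # after the last recurrent step
+     early_logits = model(prompt_ids, exit_at_step=1).logits[:, -1]  # after the second recurrent step
+
+ print(tokenizer.decode(final_logits.argmax(-1)), "vs", tokenizer.decode(early_logits.argmax(-1)))
+ ```
+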
35
+ ## Model Architecture
36
+
37
+ Based on Ouro-2.6B with additional reasoning fine-tuning:
38
+
39
+ | Configuration | Value |
40
+ |:---|:---|
41
+ | **Parameters** | 2.6B |
42
+ | **Layers** | 24 |
43
+ | **Recurrent Steps** | 4 |
44
+ | **Hidden Size** | 2048 |
45
+ | **Attention Heads** | 16 (Multi-Head Attention, MHA) |
46
+ | **FFN Activation** | SwiGLU |
47
+ | **Position Embedding** | RoPE |
48
+ | **Vocabulary Size** | 49,152 |
49
+ | **Context Length** | 32K (SFT) |
50
+ | **Normalization** | Sandwich RMSNorm |
51
+
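+ To make the **Recurrent Steps** row concrete: the same layer stack is applied `total_ut_steps` (4) times per forward pass, and a small gate scores each pass for early exit. The sketch below mirrors `OuroModel.forward` in this repo's `modeling_ouro.py` in simplified form; attention masks, KV caching, and rotary embeddings are omitted, so treat it as an illustration rather than the reference implementation.
+
+ ```python
+ import torch
+ from torch import nn
+
+ def looped_forward(layers: nn.ModuleList, final_norm: nn.Module,
+                    exit_gate: nn.Linear, hidden_states: torch.Tensor,
+                    total_ut_steps: int = 4):
+     """Reuse the same layer stack for several recurrent steps (simplified)."""
+     step_states, step_gate_logits = [], []
+     for _ in range(total_ut_steps):                 # identical weights on every recurrent step
+         for layer in layers:                        # one pass through the shared stack
+             hidden_states = layer(hidden_states)
+         hidden_states = final_norm(hidden_states)   # final RMSNorm applied after each step
+         step_states.append(hidden_states)           # intermediate latent "thoughts"
+         step_gate_logits.append(exit_gate(hidden_states))  # per-step early-exit score
+     return hidden_states, step_states, step_gate_logits
+ ```
+
+ The intermediate `step_states` are what the card above calls proxies for the final answer; the released code exposes them through the `exit_at_step`, `exit_threshold`, and `use_weighted_exit` arguments of `OuroForCausalLM.forward`.
+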
52
+ ## Training Details
53
+
54
+ ### Pre-training
55
+ - **Training Tokens**: 7.7T tokens across 4 stages
56
+ - **Base Architecture**: Ouro-2.6B
57
+
58
+ ### Supervised Fine-Tuning
59
+ - **Data Size**: ~8.3M examples
60
+ - **Data Composition**:
61
+ - Mathematics: 3.5M examples (OpenThoughts3, AceReason-1.1-SFT)
62
+ - Code: 3.2M examples (AceReason, OpenCodeReasoning, Llama-Nemotron, OpenThoughts3)
63
+ - Science: 808K examples (OpenThoughts3, Llama-Nemotron)
64
+ - Chat: 767K examples (DeepWriting-20K)
65
+ - **Training**: 2 epochs, max sequence length 32K
66
+ - **Optimizer**: Adam (lr=2×10⁻⁵, β=(0.9, 0.95))
67
+ - **Scheduler**: Cosine decay (an illustrative PyTorch setup is sketched after this list)
68
+
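+ The optimizer settings above map onto standard PyTorch components. A purely illustrative sketch follows (warmup, distributed training, and the real data pipeline are omitted, and the stand-in module is not the Ouro model):
+
+ ```python
+ import torch
+ from torch import nn
+
+ # Stand-ins so the sketch runs on its own; the actual run fine-tunes Ouro-2.6B for 2 epochs
+ # over ~8.3M examples with sequences up to 32K tokens.
+ model = nn.Linear(8, 8)
+ num_training_steps = 100
+
+ optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, betas=(0.9, 0.95))
+ scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_training_steps)
+
+ for _ in range(num_training_steps):
+     loss = model(torch.randn(4, 8)).pow(2).mean()   # placeholder for the SFT cross-entropy loss
+     loss.backward()
+     optimizer.step()
+     scheduler.step()
+     optimizer.zero_grad()
+ ```
+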
69
+ ## Quick Start
70
+
71
+ **⚠️ IMPORTANT**: Please use `transformers<4.56.0` to avoid compatibility issues. We recommend `transformers==4.54.1` or earlier versions.
72
+
73
+ ```python
74
+ from transformers import AutoModelForCausalLM, AutoTokenizer
75
+
76
+ model_name = "Bytedance/Ouro-2.6B-Thinking"
77
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
78
+ model = AutoModelForCausalLM.from_pretrained(
79
+ model_name,
80
+ device_map="auto",
81
+ torch_dtype="auto",
+ trust_remote_code=True  # required: this repo ships custom Ouro model code (see auto_map in config.json)
82
+ )
83
+
84
+ # Generate with reasoning
85
+ messages = [
86
+ {"role": "user", "content": "Solve: If 2x + 3 = 11, what is x?"}
87
+ ]
88
+ inputs = tokenizer.apply_chat_template(
89
+ messages,
90
+ tokenize=True,
91
+ add_generation_prompt=True,
92
+ return_tensors="pt"
93
+ ).to(model.device)
94
+
95
+ outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=1.0, top_p=0.7)
96
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
97
+ ```
98
+
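+ The chat template wraps the model's reasoning in the `<think>` … `</think>` special tokens declared in `special_tokens_map.json`, so the trace can be separated from the final answer after generation. A minimal post-processing sketch, continuing from the snippet above and assuming the model emits a closing `</think>` tag:
+
+ ```python
+ # Decode only the newly generated tokens, keeping special tokens so the markers survive.
+ generated = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False)
+
+ if "</think>" in generated:
+     thinking, answer = generated.split("</think>", 1)
+     thinking = thinking.replace("<think>", "").strip()
+ else:                       # no explicit thinking block was produced
+     thinking, answer = "", generated
+
+ print("Reasoning trace:\n", thinking)
+ print("Final answer:\n", answer.replace("<|im_end|>", "").strip())
+ ```
+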
99
+ ## Citation
100
+
101
+ ```bibtex
102
+ @article{ouro2025,
103
+ title={Scaling Latent Reasoning via Looped Language Models},
104
+ author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Wei, Boyi and Yin, Fan and Wen, Zixin and Xing, He and others},
105
+ journal={arXiv preprint},
106
+ year={2025}
107
+ }
108
+ ```
109
+
110
+ ## License
111
+
112
+ This model is licensed under Apache-2.0. See the LICENSE file for details.
113
+
114
+ ## Project Links
115
+
116
+ - **Paper**: [Scaling Latent Reasoning via Looped Language Models](https://ouro-llm.github.io)
117
+ - **Project Page**: [https://ouro-llm.github.io](https://ouro-llm.github.io)
118
+
119
+ ---
120
+
121
+
assets/logo.png ADDED

Git LFS Details

  • SHA256: b971dc437f16af7155034488be23ca64039d66a85348a24710d7c6459f984484
  • Pointer size: 131 Bytes
  • Size of remote file: 657 kB
assets/ouro_thinking.png ADDED

Git LFS Details

  • SHA256: 79b5db34e6094ce62a3270dc4274403bdd7922140520b49240363e10983be841
  • Pointer size: 131 Bytes
  • Size of remote file: 516 kB
config.json ADDED
@@ -0,0 +1,85 @@
1
+ {
2
+ "architectures": [
3
+ "OuroForCausalLM"
4
+ ],
5
+ "attention_dropout": 0.0,
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_ouro.OuroConfig",
8
+ "AutoModel": "modeling_ouro.OuroModel",
9
+ "AutoModelForCausalLM": "modeling_ouro.OuroForCausalLM"
10
+ },
11
+ "bos_token_id": 1,
12
+ "eos_token_id": 2,
13
+ "head_dim": 128,
14
+ "hidden_act": "silu",
15
+ "hidden_size": 2048,
16
+ "initializer_range": 0.02,
17
+ "intermediate_size": 5632,
18
+ "layer_types": [
19
+ "full_attention",
20
+ "full_attention",
21
+ "full_attention",
22
+ "full_attention",
23
+ "full_attention",
24
+ "full_attention",
25
+ "full_attention",
26
+ "full_attention",
27
+ "full_attention",
28
+ "full_attention",
29
+ "full_attention",
30
+ "full_attention",
31
+ "full_attention",
32
+ "full_attention",
33
+ "full_attention",
34
+ "full_attention",
35
+ "full_attention",
36
+ "full_attention",
37
+ "full_attention",
38
+ "full_attention",
39
+ "full_attention",
40
+ "full_attention",
41
+ "full_attention",
42
+ "full_attention",
43
+ "full_attention",
44
+ "full_attention",
45
+ "full_attention",
46
+ "full_attention",
47
+ "full_attention",
48
+ "full_attention",
49
+ "full_attention",
50
+ "full_attention",
51
+ "full_attention",
52
+ "full_attention",
53
+ "full_attention",
54
+ "full_attention",
55
+ "full_attention",
56
+ "full_attention",
57
+ "full_attention",
58
+ "full_attention",
59
+ "full_attention",
60
+ "full_attention",
61
+ "full_attention",
62
+ "full_attention",
63
+ "full_attention",
64
+ "full_attention",
65
+ "full_attention",
66
+ "full_attention"
67
+ ],
68
+ "max_position_embeddings": 65536,
69
+ "max_window_layers": 48,
70
+ "model_type": "ouro",
71
+ "num_attention_heads": 16,
72
+ "num_hidden_layers": 48,
73
+ "num_key_value_heads": 16,
74
+ "rms_norm_eps": 1e-06,
75
+ "rope_scaling": null,
76
+ "rope_theta": 1000000.0,
77
+ "sliding_window": null,
78
+ "tie_word_embeddings": false,
79
+ "torch_dtype": "bfloat16",
80
+ "total_ut_steps": 4,
81
+ "transformers_version": "4.55.0",
82
+ "use_cache": true,
83
+ "use_sliding_window": false,
84
+ "vocab_size": 49152
85
+ }
configuration_ouro.py ADDED
@@ -0,0 +1,218 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Ouro model configuration"""
16
+
17
+ from transformers.configuration_utils import PretrainedConfig, layer_type_validation
18
+ from transformers.modeling_rope_utils import rope_config_validation
19
+ from transformers.utils import logging
20
+
21
+
22
+ logger = logging.get_logger(__name__)
23
+
24
+
25
+ class OuroConfig(PretrainedConfig):
26
+ r"""
27
+ This is the configuration class to store the configuration of a [`OuroModel`]. It is used to instantiate an
28
+ Ouro model according to the specified arguments, defining the model architecture. Instantiating a configuration
29
+ with the defaults will yield a similar configuration to that of
30
+ the Ouro base model.
31
+
32
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
33
+ documentation from [`PretrainedConfig`] for more information.
34
+
35
+
36
+ Args:
37
+ vocab_size (`int`, *optional*, defaults to 151936):
38
+ Vocabulary size of the Ouro model. Defines the number of different tokens that can be represented by the
39
+ `inputs_ids` passed when calling [`OuroModel`]
40
+ hidden_size (`int`, *optional*, defaults to 4096):
41
+ Dimension of the hidden representations.
42
+ intermediate_size (`int`, *optional*, defaults to 22016):
43
+ Dimension of the MLP representations.
44
+ num_hidden_layers (`int`, *optional*, defaults to 32):
45
+ Number of hidden layers in the Transformer encoder.
46
+ num_attention_heads (`int`, *optional*, defaults to 32):
47
+ Number of attention heads for each attention layer in the Transformer encoder.
48
+ num_key_value_heads (`int`, *optional*, defaults to 32):
49
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
50
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
51
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
52
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
53
+ by meanpooling all the original heads within that group. For more details, check out [this
54
+ paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `32`.
55
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
56
+ The non-linear activation function (function or string) in the decoder.
57
+ max_position_embeddings (`int`, *optional*, defaults to 32768):
58
+ The maximum sequence length that this model might ever be used with.
59
+ initializer_range (`float`, *optional*, defaults to 0.02):
60
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
61
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
62
+ The epsilon used by the rms normalization layers.
63
+ use_cache (`bool`, *optional*, defaults to `True`):
64
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
65
+ relevant if `config.is_decoder=True`.
66
+ tie_word_embeddings (`bool`, *optional*, defaults to `False`):
67
+ Whether the model's input and output word embeddings should be tied.
68
+ rope_theta (`float`, *optional*, defaults to 10000.0):
69
+ The base period of the RoPE embeddings.
70
+ rope_scaling (`Dict`, *optional*):
71
+ Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply new rope type
72
+ and you expect the model to work on longer `max_position_embeddings`, we recommend you to update this value
73
+ accordingly.
74
+ Expected contents:
75
+ `rope_type` (`str`):
76
+ The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope',
77
+ 'llama3'], with 'default' being the original RoPE implementation.
78
+ `factor` (`float`, *optional*):
79
+ Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In
80
+ most scaling types, a `factor` of x will enable the model to handle sequences of length x *
81
+ original maximum pre-trained length.
82
+ `original_max_position_embeddings` (`int`, *optional*):
83
+ Used with 'dynamic', 'longrope' and 'llama3'. The original max position embeddings used during
84
+ pretraining.
85
+ `attention_factor` (`float`, *optional*):
86
+ Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention
87
+ computation. If unspecified, it defaults to value recommended by the implementation, using the
88
+ `factor` field to infer the suggested value.
89
+ `beta_fast` (`float`, *optional*):
90
+ Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear
91
+ ramp function. If unspecified, it defaults to 32.
92
+ `beta_slow` (`float`, *optional*):
93
+ Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear
94
+ ramp function. If unspecified, it defaults to 1.
95
+ `short_factor` (`list[float]`, *optional*):
96
+ Only used with 'longrope'. The scaling factor to be applied to short contexts (<
97
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
98
+ size divided by the number of attention heads divided by 2
99
+ `long_factor` (`list[float]`, *optional*):
100
+ Only used with 'longrope'. The scaling factor to be applied to long contexts (<
101
+ `original_max_position_embeddings`). Must be a list of numbers with the same length as the hidden
102
+ size divided by the number of attention heads divided by 2
103
+ `low_freq_factor` (`float`, *optional*):
104
+ Only used with 'llama3'. Scaling factor applied to low frequency components of the RoPE
105
+ `high_freq_factor` (`float`, *optional*):
106
+ Only used with 'llama3'. Scaling factor applied to high frequency components of the RoPE
107
+ use_sliding_window (`bool`, *optional*, defaults to `False`):
108
+ Whether to use sliding window attention.
109
+ sliding_window (`int`, *optional*, defaults to 4096):
110
+ Sliding window attention (SWA) window size. If not specified, will default to `4096`.
111
+ max_window_layers (`int`, *optional*, defaults to 28):
112
+ The number of layers using full attention. The first `max_window_layers` layers will use full attention, while any
113
+ additional layer afterwards will use SWA (Sliding Window Attention).
114
+ layer_types (`list`, *optional*):
115
+ Attention pattern for each layer.
116
+ attention_dropout (`float`, *optional*, defaults to 0.0):
117
+ The dropout ratio for the attention probabilities.
118
+
119
+ ```python
120
+ >>> from transformers import OuroModel, OuroConfig
121
+
122
+ >>> # Initializing a Ouro style configuration
123
+ >>> configuration = OuroConfig()
124
+
125
+ >>> # Initializing a model from the Ouro-7B style configuration
126
+ >>> model = OuroModel(configuration)
127
+
128
+ >>> # Accessing the model configuration
129
+ >>> configuration = model.config
130
+ ```"""
131
+
132
+ model_type = "ouro"
133
+ keys_to_ignore_at_inference = ["past_key_values"]
134
+
135
+ # Default tensor parallel plan for base model `Ouro`
136
+ base_model_tp_plan = {
137
+ "layers.*.self_attn.q_proj": "colwise",
138
+ "layers.*.self_attn.k_proj": "colwise",
139
+ "layers.*.self_attn.v_proj": "colwise",
140
+ "layers.*.self_attn.o_proj": "rowwise",
141
+ "layers.*.mlp.gate_proj": "colwise",
142
+ "layers.*.mlp.up_proj": "colwise",
143
+ "layers.*.mlp.down_proj": "rowwise",
144
+ }
145
+ base_model_pp_plan = {
146
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
147
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
148
+ "norm": (["hidden_states"], ["hidden_states"]),
149
+ }
150
+
151
+ def __init__(
152
+ self,
153
+ vocab_size=151936,
154
+ hidden_size=4096,
155
+ intermediate_size=22016,
156
+ num_hidden_layers=32,
157
+ num_attention_heads=32,
158
+ num_key_value_heads=32,
159
+ hidden_act="silu",
160
+ max_position_embeddings=32768,
161
+ initializer_range=0.02,
162
+ rms_norm_eps=1e-6,
163
+ use_cache=True,
164
+ tie_word_embeddings=False,
165
+ rope_theta=10000.0,
166
+ rope_scaling=None,
167
+ use_sliding_window=False,
168
+ sliding_window=4096,
169
+ max_window_layers=28,
170
+ layer_types=None,
171
+ attention_dropout=0.0,
172
+ **kwargs,
173
+ ):
174
+ self.vocab_size = vocab_size
175
+ self.max_position_embeddings = max_position_embeddings
176
+ self.hidden_size = hidden_size
177
+ self.intermediate_size = intermediate_size
178
+ self.num_hidden_layers = num_hidden_layers
179
+ self.num_attention_heads = num_attention_heads
180
+ self.use_sliding_window = use_sliding_window
181
+ self.sliding_window = sliding_window if self.use_sliding_window else None
182
+ self.max_window_layers = max_window_layers
183
+
184
+ # for backward compatibility
185
+ if num_key_value_heads is None:
186
+ num_key_value_heads = num_attention_heads
187
+
188
+ self.num_key_value_heads = num_key_value_heads
189
+ self.hidden_act = hidden_act
190
+ self.initializer_range = initializer_range
191
+ self.rms_norm_eps = rms_norm_eps
192
+ self.use_cache = use_cache
193
+ self.rope_theta = rope_theta
194
+ self.rope_scaling = rope_scaling
195
+ self.attention_dropout = attention_dropout
196
+ # Validate the correctness of rotary position embeddings parameters
197
+ # BC: if there is a 'type' field, move it to 'rope_type'.
198
+ if self.rope_scaling is not None and "type" in self.rope_scaling:
199
+ self.rope_scaling["rope_type"] = self.rope_scaling["type"]
200
+ rope_config_validation(self)
201
+
202
+ self.layer_types = layer_types
203
+ if self.layer_types is None:
204
+ self.layer_types = [
205
+ "sliding_attention"
206
+ if self.sliding_window is not None and i >= self.max_window_layers
207
+ else "full_attention"
208
+ for i in range(self.num_hidden_layers)
209
+ ]
210
+ layer_type_validation(self.layer_types)
211
+
212
+ super().__init__(
213
+ tie_word_embeddings=tie_word_embeddings,
214
+ **kwargs,
215
+ )
216
+
217
+
218
+ __all__ = ["OuroConfig"]
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c506a79247dc51fc0400d789365c3d43932f718abce9810f3606ace47d0a3080
3
+ size 5336011242
modeling_ouro.py ADDED
@@ -0,0 +1,594 @@
1
+ from typing import Callable, Optional, Union
2
+
3
+ import torch
4
+ from torch import nn
5
+
6
+ from transformers.activations import ACT2FN
7
+ from transformers.cache_utils import Cache, DynamicCache
8
+ from transformers.generation import GenerationMixin
9
+ from transformers.integrations import use_kernel_forward_from_hub
10
+ from transformers.masking_utils import create_causal_mask, create_sliding_window_causal_mask
11
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
12
+ from transformers.modeling_layers import (
13
+ GenericForQuestionAnswering,
14
+ GenericForSequenceClassification,
15
+ GenericForTokenClassification,
16
+ GradientCheckpointingLayer,
17
+ )
18
+ from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
19
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
20
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
21
+ from transformers.processing_utils import Unpack
22
+ from transformers.utils import TransformersKwargs, auto_docstring, can_return_tuple
23
+ from transformers.utils.generic import check_model_inputs
24
+ from .configuration_ouro import OuroConfig
25
+
26
+
27
+ class OuroMLP(nn.Module):
28
+ def __init__(self, config):
29
+ super().__init__()
30
+ self.config = config
31
+ self.hidden_size = config.hidden_size
32
+ self.intermediate_size = config.intermediate_size
33
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
34
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
35
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
36
+ self.act_fn = ACT2FN[config.hidden_act]
37
+
38
+ def forward(self, x):
39
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
40
+ return down_proj
41
+
42
+
43
+ def rotate_half(x):
44
+ """Rotates half the hidden dims of the input."""
45
+ x1 = x[..., : x.shape[-1] // 2]
46
+ x2 = x[..., x.shape[-1] // 2 :]
47
+ return torch.cat((-x2, x1), dim=-1)
48
+
49
+
50
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
51
+ """Applies Rotary Position Embedding to the query and key tensors.
52
+
53
+ Args:
54
+ q (`torch.Tensor`): The query tensor.
55
+ k (`torch.Tensor`): The key tensor.
56
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
57
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
58
+ position_ids (`torch.Tensor`, *optional*):
59
+ Deprecated and unused.
60
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
61
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
62
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
63
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
64
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
65
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
66
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
67
+ Returns:
68
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
69
+ """
70
+ cos = cos.unsqueeze(unsqueeze_dim)
71
+ sin = sin.unsqueeze(unsqueeze_dim)
72
+ q_embed = (q * cos) + (rotate_half(q) * sin)
73
+ k_embed = (k * cos) + (rotate_half(k) * sin)
74
+ return q_embed, k_embed
75
+
76
+
77
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
78
+ """
79
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
80
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
81
+ """
82
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
83
+ if n_rep == 1:
84
+ return hidden_states
85
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
86
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
87
+
88
+
89
+ def eager_attention_forward(
90
+ module: nn.Module,
91
+ query: torch.Tensor,
92
+ key: torch.Tensor,
93
+ value: torch.Tensor,
94
+ attention_mask: Optional[torch.Tensor],
95
+ scaling: float,
96
+ dropout: float = 0.0,
97
+ **kwargs: Unpack[TransformersKwargs],
98
+ ):
99
+ key_states = repeat_kv(key, module.num_key_value_groups)
100
+ value_states = repeat_kv(value, module.num_key_value_groups)
101
+
102
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
103
+ if attention_mask is not None:
104
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
105
+ attn_weights = attn_weights + causal_mask
106
+
107
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
108
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
109
+ attn_output = torch.matmul(attn_weights, value_states)
110
+ attn_output = attn_output.transpose(1, 2).contiguous()
111
+
112
+ return attn_output, attn_weights
113
+
114
+
115
+ class OuroAttention(nn.Module):
116
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
117
+
118
+ def __init__(self, config: OuroConfig, layer_idx: int):
119
+ super().__init__()
120
+ self.config = config
121
+ self.layer_idx = layer_idx
122
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
123
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
124
+ self.scaling = self.head_dim**-0.5
125
+ self.attention_dropout = config.attention_dropout
126
+ self.is_causal = True
127
+ self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=False)
128
+ self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
129
+ self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
130
+ self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
131
+ self.sliding_window = config.sliding_window if config.layer_types[layer_idx] == "sliding_attention" else None
132
+
133
+ def forward(
134
+ self,
135
+ hidden_states: torch.Tensor,
136
+ position_embeddings: tuple[torch.Tensor, torch.Tensor],
137
+ attention_mask: Optional[torch.Tensor],
138
+ past_key_value: Optional[Cache] = None,
139
+ cache_position: Optional[torch.LongTensor] = None,
140
+ current_ut: int = 0,
141
+ **kwargs: Unpack[FlashAttentionKwargs],
142
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
143
+ input_shape = hidden_states.shape[:-1]
144
+ hidden_shape = (*input_shape, -1, self.head_dim)
145
+
146
+ query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
147
+ key_states = self.k_proj(hidden_states).view(hidden_shape).transpose(1, 2)
148
+ value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
149
+
150
+ cos, sin = position_embeddings
151
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
152
+
153
+ if past_key_value is not None:
154
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
155
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
156
+ key_states, value_states = past_key_value.update(key_states, value_states, current_ut * self.config.num_hidden_layers + self.layer_idx, cache_kwargs)
157
+
158
+ attention_interface: Callable = eager_attention_forward
159
+ if self.config._attn_implementation != "eager":
160
+ attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
161
+
162
+ attn_output, attn_weights = attention_interface(
163
+ self,
164
+ query_states,
165
+ key_states,
166
+ value_states,
167
+ attention_mask,
168
+ dropout=0.0 if not self.training else self.attention_dropout,
169
+ scaling=self.scaling,
170
+ sliding_window=self.sliding_window, # main diff with Llama
171
+ **kwargs,
172
+ )
173
+
174
+ attn_output = attn_output.reshape(*input_shape, -1).contiguous()
175
+ attn_output = self.o_proj(attn_output)
176
+ return attn_output, attn_weights
177
+
178
+
179
+ @use_kernel_forward_from_hub("RMSNorm")
180
+ class OuroRMSNorm(nn.Module):
181
+ def __init__(self, hidden_size, eps=1e-6):
182
+ """
183
+ OuroRMSNorm is equivalent to T5LayerNorm
184
+ """
185
+ super().__init__()
186
+ self.weight = nn.Parameter(torch.ones(hidden_size))
187
+ self.variance_epsilon = eps
188
+
189
+ def forward(self, hidden_states):
190
+ input_dtype = hidden_states.dtype
191
+ hidden_states = hidden_states.to(torch.float32)
192
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
193
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
194
+ return self.weight * hidden_states.to(input_dtype)
195
+
196
+ def extra_repr(self):
197
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
198
+
199
+
200
+ class OuroDecoderLayer(GradientCheckpointingLayer):
201
+ def __init__(self, config: OuroConfig, layer_idx: int):
202
+ super().__init__()
203
+ self.hidden_size = config.hidden_size
204
+
205
+ self.self_attn = OuroAttention(config=config, layer_idx=layer_idx)
206
+
207
+ self.mlp = OuroMLP(config)
208
+ self.input_layernorm = OuroRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
209
+ self.input_layernorm_2 = OuroRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
210
+ self.post_attention_layernorm = OuroRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
211
+ self.post_attention_layernorm_2 = OuroRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
212
+ self.attention_type = config.layer_types[layer_idx]
213
+
214
+ def forward(
215
+ self,
216
+ hidden_states: torch.Tensor,
217
+ attention_mask: Optional[torch.Tensor] = None,
218
+ position_ids: Optional[torch.LongTensor] = None,
219
+ past_key_value: Optional[Cache] = None,
220
+ use_cache: Optional[bool] = False,
221
+ cache_position: Optional[torch.LongTensor] = None,
222
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
223
+ **kwargs: Unpack[TransformersKwargs],
224
+ ) -> tuple[torch.Tensor]:
225
+ residual = hidden_states
226
+ hidden_states = self.input_layernorm(hidden_states)
227
+ # Self Attention
228
+ hidden_states, _ = self.self_attn(
229
+ hidden_states=hidden_states,
230
+ attention_mask=attention_mask,
231
+ position_ids=position_ids,
232
+ past_key_value=past_key_value,
233
+ use_cache=use_cache,
234
+ cache_position=cache_position,
235
+ position_embeddings=position_embeddings,
236
+ **kwargs,
237
+ )
238
+ hidden_states = self.input_layernorm_2(hidden_states)
239
+ hidden_states = residual + hidden_states
240
+
241
+ # Fully Connected
242
+ residual = hidden_states
243
+ hidden_states = self.post_attention_layernorm(hidden_states)
244
+ hidden_states = self.mlp(hidden_states)
245
+ hidden_states = self.post_attention_layernorm_2(hidden_states)
246
+ hidden_states = residual + hidden_states
247
+ return hidden_states
248
+
249
+
250
+ @auto_docstring
251
+ class OuroPreTrainedModel(PreTrainedModel):
252
+ config: OuroConfig
253
+ base_model_prefix = "model"
254
+ supports_gradient_checkpointing = True
255
+ _no_split_modules = ["OuroDecoderLayer"]
256
+ _skip_keys_device_placement = ["past_key_values"]
257
+ _supports_flash_attn = True
258
+ _supports_sdpa = True
259
+ _supports_flex_attn = True
260
+
261
+ _can_compile_fullgraph = True
262
+ _supports_attention_backend = True
263
+ _can_record_outputs = {
264
+ "hidden_states": OuroDecoderLayer,
265
+ "attentions": OuroAttention,
266
+ }
267
+
268
+
269
+ class OuroRotaryEmbedding(nn.Module):
270
+ def __init__(self, config: OuroConfig, device=None):
271
+ super().__init__()
272
+ # BC: "rope_type" was originally "type"
273
+ if hasattr(config, "rope_scaling") and isinstance(config.rope_scaling, dict):
274
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
275
+ else:
276
+ self.rope_type = "default"
277
+ self.max_seq_len_cached = config.max_position_embeddings
278
+ self.original_max_seq_len = config.max_position_embeddings
279
+
280
+ self.config = config
281
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
282
+
283
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
284
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
285
+ self.original_inv_freq = self.inv_freq
286
+
287
+ @torch.no_grad()
288
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
289
+ def forward(self, x, position_ids):
290
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
291
+ position_ids_expanded = position_ids[:, None, :].float()
292
+
293
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
294
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
295
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
296
+ emb = torch.cat((freqs, freqs), dim=-1)
297
+ cos = emb.cos() * self.attention_scaling
298
+ sin = emb.sin() * self.attention_scaling
299
+
300
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
301
+
302
+
303
+ @auto_docstring
304
+ class OuroModel(OuroPreTrainedModel):
305
+ def __init__(self, config: OuroConfig):
306
+ super().__init__(config)
307
+ self.padding_idx = config.pad_token_id
308
+ self.vocab_size = config.vocab_size
309
+
310
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
311
+ self.layers = nn.ModuleList(
312
+ [OuroDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
313
+ )
314
+ self.norm = OuroRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
315
+ self.rotary_emb = OuroRotaryEmbedding(config=config)
316
+ self.gradient_checkpointing = False
317
+ self.has_sliding_layers = "sliding_attention" in self.config.layer_types
318
+ self.total_ut_steps = getattr(self.config, "total_ut_steps", 4)
319
+ self.early_exit_gate = nn.Linear(config.hidden_size, 1)
320
+ # Initialize weights and apply final processing
321
+ self.post_init()
322
+
323
+ @check_model_inputs
324
+ @auto_docstring
325
+ def forward(
326
+ self,
327
+ input_ids: Optional[torch.LongTensor] = None,
328
+ attention_mask: Optional[torch.Tensor] = None,
329
+ position_ids: Optional[torch.LongTensor] = None,
330
+ past_key_values: Optional[Cache] = None,
331
+ inputs_embeds: Optional[torch.FloatTensor] = None,
332
+ use_cache: Optional[bool] = None,
333
+ cache_position: Optional[torch.LongTensor] = None,
334
+ **kwargs: Unpack[TransformersKwargs],
335
+ ) -> BaseModelOutputWithPast:
336
+ if (input_ids is None) ^ (inputs_embeds is not None):
337
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
338
+
339
+ if inputs_embeds is None:
340
+ inputs_embeds = self.embed_tokens(input_ids)
341
+
342
+ if use_cache and past_key_values is None:
343
+ past_key_values = DynamicCache()
344
+
345
+ if cache_position is None:
346
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
347
+ cache_position = torch.arange(
348
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
349
+ )
350
+
351
+ if position_ids is None:
352
+ position_ids = cache_position.unsqueeze(0)
353
+
354
+ # It may already have been prepared by e.g. `generate`
355
+ if not isinstance(causal_mask_mapping := attention_mask, dict):
356
+ # Prepare mask arguments
357
+ mask_kwargs = {
358
+ "config": self.config,
359
+ "input_embeds": inputs_embeds,
360
+ "attention_mask": attention_mask,
361
+ "cache_position": cache_position,
362
+ "past_key_values": past_key_values,
363
+ "position_ids": position_ids,
364
+ }
365
+ # Create the masks
366
+ causal_mask_mapping = {
367
+ "full_attention": create_causal_mask(**mask_kwargs),
368
+ }
369
+ # The sliding window alternating layers are not always activated depending on the config
370
+ if self.has_sliding_layers:
371
+ causal_mask_mapping["sliding_attention"] = create_sliding_window_causal_mask(**mask_kwargs)
372
+
373
+ hidden_states = inputs_embeds
374
+
375
+ # create position embeddings to be shared across the decoder layers
376
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
377
+ hidden_states_list = []
378
+ gate_list = []
379
+
380
+ for current_ut in range(self.total_ut_steps):
381
+ for decoder_layer in self.layers[: self.config.num_hidden_layers]:
382
+ hidden_states = decoder_layer(
383
+ hidden_states,
384
+ attention_mask=causal_mask_mapping[decoder_layer.attention_type],
385
+ position_ids=position_ids,
386
+ past_key_value=past_key_values,
387
+ use_cache=use_cache,
388
+ cache_position=cache_position,
389
+ position_embeddings=position_embeddings,
390
+ current_ut=current_ut,
391
+ **kwargs,
392
+ )
393
+
394
+ hidden_states = self.norm(hidden_states)
395
+ hidden_states_list.append(hidden_states)
396
+ gate_list.append(self.early_exit_gate(hidden_states))
397
+
398
+ return BaseModelOutputWithPast(
399
+ last_hidden_state=hidden_states,
400
+ past_key_values=past_key_values if use_cache else None,
401
+ ), hidden_states_list, gate_list
402
+
403
+
404
+ @auto_docstring
405
+ class OuroForCausalLM(OuroPreTrainedModel, GenerationMixin):
406
+ _tied_weights_keys = ["lm_head.weight"]
407
+ _tp_plan = {"lm_head": "colwise_rep"}
408
+ _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
409
+
410
+ def __init__(self, config):
411
+ super().__init__(config)
412
+ self.model = OuroModel(config)
413
+ self.vocab_size = config.vocab_size
414
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
415
+
416
+ # Chunk size configuration
417
+ self.chunk_size = getattr(config, 'chunk_size', 2)  # default chunk size: 2
418
+ self.early_exit_step = getattr(config, "early_exit_step", None)
419
+ self.early_exit_threshold = getattr(config, "early_exit_threshold", None)
420
+
421
+
422
+ # Initialize weights and apply final processing
423
+ self.post_init()
424
+
425
+ def set_decoder(self, decoder):
426
+ self.model = decoder
427
+
428
+ def get_decoder(self):
429
+ return self.model
430
+
431
+ @can_return_tuple
432
+ @auto_docstring
433
+ def forward(
434
+ self,
435
+ input_ids: Optional[torch.LongTensor] = None,
436
+ attention_mask: Optional[torch.Tensor] = None,
437
+ position_ids: Optional[torch.LongTensor] = None,
438
+ past_key_values: Optional[Cache] = None,
439
+ inputs_embeds: Optional[torch.FloatTensor] = None,
440
+ labels: Optional[torch.LongTensor] = None,
441
+ use_cache: Optional[bool] = None,
442
+ cache_position: Optional[torch.LongTensor] = None,
443
+ logits_to_keep: Union[int, torch.Tensor] = 0,
444
+ use_weighted_exit: Optional[bool] = False,  # whether to use weighted early exit
445
+ exit_at_step: Optional[int] = None,
446
+ exit_threshold: Optional[float] = None,
447
+ **kwargs: Unpack[TransformersKwargs],
448
+ ) -> CausalLMOutputWithPast:
449
+ r"""
450
+ Args:
451
+ use_weighted_exit (`bool`, *optional*, defaults to `False`):
452
+ Whether to use weighted early exit. If `True`, the logits from all UT steps will be
453
+ averaged according to the exit probability distribution.
454
+ exit_at_step (`int`, *optional*):
455
+ Specifies which UT step to exit at. If set, the model will directly use the hidden states
456
+ from this step to generate logits, ignoring other exit strategies.
457
+ exit_threshold (`float`, *optional*):
458
+ The cumulative probability threshold for early exit. When the cumulative exit probability
459
+ reaches this threshold, the model will exit at that step.
460
+
461
+ Example:
462
+
463
+ ```python
464
+ >>> from transformers import AutoTokenizer, OuroForCausalLM
+
+ >>> model = OuroForCausalLM.from_pretrained("Bytedance/Ouro-2.6B-Thinking")
+ >>> tokenizer = AutoTokenizer.from_pretrained("Bytedance/Ouro-2.6B-Thinking")
465
+
466
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
467
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
468
+
469
+ >>> # Generate
470
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
471
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
472
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
473
+ ```"""
474
+ exit_at_step = exit_at_step if exit_at_step is not None else self.early_exit_step
475
+ exit_threshold = exit_threshold if exit_threshold is not None else self.early_exit_threshold
476
+
477
+ outputs, hidden_states_list, gate_list = self.model(
478
+ input_ids=input_ids,
479
+ attention_mask=attention_mask,
480
+ position_ids=position_ids,
481
+ past_key_values=past_key_values,
482
+ inputs_embeds=inputs_embeds,
483
+ use_cache=use_cache,
484
+ cache_position=cache_position,
485
+ **kwargs,
486
+ )
487
+ slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
488
+
489
+ def _select_token_positions(tensor: torch.Tensor) -> torch.Tensor:
490
+ if isinstance(slice_indices, slice):
491
+ return tensor[:, slice_indices, ...]
492
+ if isinstance(slice_indices, torch.Tensor):
493
+ return tensor.index_select(1, slice_indices.to(tensor.device))
494
+ raise TypeError(f"Unsupported index type for logits_to_keep: {type(slice_indices)}")
495
+
496
+ stacked_exit_pdf = None
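+ # Convert the per-step exit-gate logits into a distribution over recurrent (UT) steps using a
+ # stick-breaking construction: p_i = sigmoid(g_i) * prod_{j<i} (1 - sigmoid(g_j)), and the
+ # last step keeps whatever probability mass remains.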
497
+ if gate_list:
498
+ pdf_list = []
499
+ remaining_prob = torch.ones_like(gate_list[0].squeeze(-1))
500
+ for idx, gate_tensor in enumerate(gate_list):
501
+ lambda_i = torch.sigmoid(gate_tensor.squeeze(-1))
502
+ if idx < len(gate_list) - 1:
503
+ p_i = lambda_i * remaining_prob
504
+ remaining_prob = remaining_prob * (1.0 - lambda_i)
505
+ else:
506
+ p_i = remaining_prob
507
+ pdf_list.append(p_i)
508
+ stacked_exit_pdf = torch.stack(pdf_list, dim=2)
509
+
510
+ expected_logits_cache: Optional[torch.Tensor] = None
511
+
512
+ def compute_expected_logits() -> Optional[torch.Tensor]:
513
+ nonlocal expected_logits_cache
514
+ if expected_logits_cache is not None:
515
+ return expected_logits_cache
516
+ if stacked_exit_pdf is None or not hidden_states_list:
517
+ return None
518
+ token_exit_pdf = _select_token_positions(stacked_exit_pdf)
519
+ expected_logits = None
520
+ for step_idx, hidden in enumerate(hidden_states_list):
521
+ step_hidden = _select_token_positions(hidden)
522
+ step_logits = self.lm_head(step_hidden)
523
+ weight = token_exit_pdf[..., step_idx].unsqueeze(-1).to(step_logits.dtype)
524
+ expected_logits = step_logits * weight if expected_logits is None else expected_logits + step_logits * weight
525
+ expected_logits_cache = expected_logits
526
+ return expected_logits_cache
527
+
528
+ logits: Optional[torch.Tensor] = None
529
+ loss: Optional[torch.Tensor] = None
530
+
531
+ if labels is not None:
532
+ logits = compute_expected_logits()
533
+ if logits is None:
534
+ hidden_states = outputs.last_hidden_state
535
+ logits = self.lm_head(_select_token_positions(hidden_states))
536
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
537
+ else:
538
+ if stacked_exit_pdf is not None and hidden_states_list:
539
+ if exit_at_step is not None and 0 <= exit_at_step < len(hidden_states_list):
540
+ selected_hidden = hidden_states_list[exit_at_step]
541
+ logits = self.lm_head(_select_token_positions(selected_hidden))
542
+ elif exit_threshold is not None:
543
+ cumulative_probs = torch.cumsum(stacked_exit_pdf, dim=2)
544
+ threshold_value = exit_threshold
545
+ if isinstance(threshold_value, torch.Tensor):
546
+ threshold_value = threshold_value.to(cumulative_probs.device)
547
+ threshold_mask = cumulative_probs >= threshold_value
548
+ exit_steps = torch.argmax(threshold_mask.float(), dim=2)
549
+ last_step_idx = stacked_exit_pdf.shape[2] - 1
550
+ if last_step_idx >= 0:
551
+ never_exceeded = ~threshold_mask.any(dim=2)
552
+ exit_steps[never_exceeded] = last_step_idx
553
+ stacked_hidden = torch.stack(hidden_states_list, dim=2)
554
+ gather_index = exit_steps.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, 1, stacked_hidden.size(-1))
555
+ final_hidden_states = torch.gather(stacked_hidden, 2, gather_index).squeeze(2)
556
+ logits = self.lm_head(_select_token_positions(final_hidden_states))
557
+ elif use_weighted_exit:
558
+ logits = compute_expected_logits()
559
+
560
+ if logits is None:
561
+ hidden_states = outputs.last_hidden_state
562
+ logits = self.lm_head(_select_token_positions(hidden_states))
563
+
564
+ result = CausalLMOutputWithPast(
565
+ loss=loss,
566
+ logits=logits,
567
+ past_key_values=outputs.past_key_values,
568
+ hidden_states=outputs.hidden_states,
569
+ attentions=outputs.attentions,
570
+ )
571
+
572
+ return result
573
+
574
+
575
+ class OuroForSequenceClassification(GenericForSequenceClassification, OuroPreTrainedModel):
576
+ pass
577
+
578
+
579
+ class OuroForTokenClassification(GenericForTokenClassification, OuroPreTrainedModel):
580
+ pass
581
+
582
+
583
+ class OuroForQuestionAnswering(GenericForQuestionAnswering, OuroPreTrainedModel):
584
+ base_model_prefix = "transformer" # For BC, where `transformer` was used instead of `model`
585
+
586
+
587
+ __all__ = [
588
+ "OuroPreTrainedModel",
589
+ "OuroModel",
590
+ "OuroForCausalLM",
591
+ "OuroForSequenceClassification",
592
+ "OuroForTokenClassification",
593
+ "OuroForQuestionAnswering",
594
+ ]
special_tokens_map.json ADDED
@@ -0,0 +1,49 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|endoftext|>",
4
+ "<|im_start|>",
5
+ "<|im_end|>",
6
+ "<think>",
7
+ "</think>",
8
+ "<file_sep>",
9
+ "<filename>",
10
+ "<gh_stars>",
11
+ "<issue_start>",
12
+ "<issue_comment>",
13
+ "<issue_closed>",
14
+ "<jupyter_start>",
15
+ "<jupyter_text>",
16
+ "<jupyter_code>",
17
+ "<jupyter_output>",
18
+ "<jupyter_script>",
19
+ "<empty_output>"
20
+ ],
21
+ "bos_token": {
22
+ "content": "<|endoftext|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false
27
+ },
28
+ "eos_token": {
29
+ "content": "<|im_end|>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false
34
+ },
35
+ "pad_token": {
36
+ "content": "<|im_end|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false
41
+ },
42
+ "unk_token": {
43
+ "content": "<|endoftext|>",
44
+ "lstrip": false,
45
+ "normalized": false,
46
+ "rstrip": false,
47
+ "single_word": false
48
+ }
49
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,169 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "0": {
5
+ "content": "<|endoftext|>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "1": {
13
+ "content": "<|im_start|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "2": {
21
+ "content": "<|im_end|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "3": {
29
+ "content": "<think>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "4": {
37
+ "content": "</think>",
38
+ "lstrip": false,
39
+ "normalized": false,
40
+ "rstrip": false,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "5": {
45
+ "content": "<file_sep>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "6": {
53
+ "content": "<filename>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false,
58
+ "special": true
59
+ },
60
+ "7": {
61
+ "content": "<gh_stars>",
62
+ "lstrip": false,
63
+ "normalized": false,
64
+ "rstrip": false,
65
+ "single_word": false,
66
+ "special": true
67
+ },
68
+ "8": {
69
+ "content": "<issue_start>",
70
+ "lstrip": false,
71
+ "normalized": false,
72
+ "rstrip": false,
73
+ "single_word": false,
74
+ "special": true
75
+ },
76
+ "9": {
77
+ "content": "<issue_comment>",
78
+ "lstrip": false,
79
+ "normalized": false,
80
+ "rstrip": false,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "10": {
85
+ "content": "<issue_closed>",
86
+ "lstrip": false,
87
+ "normalized": false,
88
+ "rstrip": false,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "11": {
93
+ "content": "<jupyter_start>",
94
+ "lstrip": false,
95
+ "normalized": false,
96
+ "rstrip": false,
97
+ "single_word": false,
98
+ "special": true
99
+ },
100
+ "12": {
101
+ "content": "<jupyter_text>",
102
+ "lstrip": false,
103
+ "normalized": false,
104
+ "rstrip": false,
105
+ "single_word": false,
106
+ "special": true
107
+ },
108
+ "13": {
109
+ "content": "<jupyter_code>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false,
114
+ "special": true
115
+ },
116
+ "14": {
117
+ "content": "<jupyter_output>",
118
+ "lstrip": false,
119
+ "normalized": false,
120
+ "rstrip": false,
121
+ "single_word": false,
122
+ "special": true
123
+ },
124
+ "15": {
125
+ "content": "<jupyter_script>",
126
+ "lstrip": false,
127
+ "normalized": false,
128
+ "rstrip": false,
129
+ "single_word": false,
130
+ "special": true
131
+ },
132
+ "16": {
133
+ "content": "<empty_output>",
134
+ "lstrip": false,
135
+ "normalized": false,
136
+ "rstrip": false,
137
+ "single_word": false,
138
+ "special": true
139
+ }
140
+ },
141
+ "additional_special_tokens": [
142
+ "<|endoftext|>",
143
+ "<|im_start|>",
144
+ "<|im_end|>",
145
+ "<think>",
146
+ "</think>",
147
+ "<file_sep>",
148
+ "<filename>",
149
+ "<gh_stars>",
150
+ "<issue_start>",
151
+ "<issue_comment>",
152
+ "<issue_closed>",
153
+ "<jupyter_start>",
154
+ "<jupyter_text>",
155
+ "<jupyter_code>",
156
+ "<jupyter_output>",
157
+ "<jupyter_script>",
158
+ "<empty_output>"
159
+ ],
160
+ "bos_token": "<|endoftext|>",
161
+ "clean_up_tokenization_spaces": false,
162
+ "chat_template": "{%- if messages[0]['role'] == 'system' -%}{{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}{%- else -%}{{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}{%- endif -%}{%- for message in messages -%}{%- if message.role == 'system' and loop.first -%}{# Skip #}{%- else -%}{{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n' }}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{- '<|im_start|>assistant\\n' }}{%- endif -%}",
163
+ "eos_token": "<|endoftext|>",
164
+ "extra_special_tokens": {},
165
+ "model_max_length": 131072,
166
+ "tokenizer_class": "GPT2Tokenizer",
167
+ "unk_token": "<|endoftext|>",
168
+ "vocab_size": 49152
169
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff