ridger committed
Commit d5d2e61 · verified · 1 Parent(s): 007a3d0

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +35 -0
  2. config.json +2 -0
  3. configuration_ouro.py +4 -0
README.md CHANGED
@@ -31,6 +31,38 @@ tags:
 - **Cross-Step Consistency**: Intermediate recurrent outputs can serve as reliable proxies for final answers
 - **Explicit Thinking Process**: Trained to generate detailed reasoning steps
 
+ ## Configuration
+
+ ### Recurrent Steps and Adaptive Exit
+
+ The model's computational behavior can be configured through the `config.json` file:
+
+ ```json
+ {
+   "total_ut_steps": 4,
+   "early_exit_threshold": 1.0
+ }
+ ```
+
+ - **`total_ut_steps`**: Number of recurrent steps (default: 4). Adjust this value to trade answer quality against computation time.
+ - **`early_exit_threshold`**: Controls the adaptive exit mechanism (default: 1.0). Lower values encourage earlier exits; 1.0 means all steps are always used.
+
+ **Example: Modify recurrent steps**
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ config = AutoConfig.from_pretrained("ByteDance/Ouro-2.6B-Thinking")
+ config.total_ut_steps = 3  # Use 3 recurrent steps instead of 4
+ model = AutoModelForCausalLM.from_pretrained(
+     "ByteDance/Ouro-2.6B-Thinking",
+     config=config,
+     device_map="auto"
+ )
+ ```
+
+ > **Note**: vLLM does not currently support the adaptive exit feature because of how it optimizes inference. When using vLLM, the model always executes the full `total_ut_steps`.
+
+
 ## Model Architecture
 
 Based on Ouro-2.6B with additional reasoning fine-tuning:
@@ -65,6 +97,7 @@ Based on Ouro-2.6B with additional reasoning fine-tuning:
 - **Optimizer**: Adam (lr=2×10⁻⁵, β=(0.9, 0.95))
 - **Scheduler**: Cosine decay
 
+
 ## Quick Start
 
 **⚠️ IMPORTANT**: Please use `transformers<4.56.0` to avoid compatibility issues. We recommend `transformers==4.54.1` or earlier versions.
@@ -95,6 +128,8 @@ outputs = model.generate(inputs, max_new_tokens=512, temperature=1.0, top_p=0.7)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
+
+
 ## Citation
 
 ```bibtex
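The README addition only shows an override for `total_ut_steps`; a companion sketch for the adaptive-exit knob, assuming `early_exit_threshold` is picked up from the config at load time in the same way, could look as follows. The value 0.8 is purely illustrative, and `trust_remote_code=True` is an assumption based on the repo shipping a custom `configuration_ouro.py`:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Minimal sketch (not from the commit): lower early_exit_threshold so the
# model may stop before running all recurrent steps; 1.0 disables early exit.
config = AutoConfig.from_pretrained(
    "ByteDance/Ouro-2.6B-Thinking",
    trust_remote_code=True,  # assumption: needed for the custom OuroConfig
)
config.early_exit_threshold = 0.8  # illustrative value, not a recommendation

model = AutoModelForCausalLM.from_pretrained(
    "ByteDance/Ouro-2.6B-Thinking",
    config=config,
    device_map="auto",
    trust_remote_code=True,
)
```

Note that, per the README note above, this knob has no effect under vLLM.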
config.json CHANGED
@@ -78,8 +78,10 @@
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
   "total_ut_steps": 4,
+  "early_exit_threshold": 1.0,
   "transformers_version": "4.55.0",
   "use_cache": true,
   "use_sliding_window": false,
   "vocab_size": 49152
+
 }
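As a quick check that the new field round-trips, one might load the config and read both values back. A minimal sketch, assuming the hub revision includes this commit's `config.json`:

```python
from transformers import AutoConfig

# Sketch: read back the fields touched by this commit.
config = AutoConfig.from_pretrained(
    "ByteDance/Ouro-2.6B-Thinking",
    trust_remote_code=True,  # assumption: custom OuroConfig ships with the repo
)
print(config.total_ut_steps)        # expected: 4
print(config.early_exit_threshold)  # expected: 1.0
```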
configuration_ouro.py CHANGED
@@ -169,6 +169,8 @@ class OuroConfig(PretrainedConfig):
         max_window_layers=28,
         layer_types=None,
         attention_dropout=0.0,
+        total_ut_steps=4,
+        early_exit_threshold=1.0,
         **kwargs,
     ):
         self.vocab_size = vocab_size
@@ -193,6 +195,8 @@ class OuroConfig(PretrainedConfig):
         self.rope_theta = rope_theta
         self.rope_scaling = rope_scaling
         self.attention_dropout = attention_dropout
+        self.total_ut_steps = total_ut_steps
+        self.early_exit_threshold = early_exit_threshold
         # Validate the correctness of rotary position embeddings parameters
         # BC: if there is a 'type' field, move it to 'rope_type'.
         if self.rope_scaling is not None and "type" in self.rope_scaling:
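With the new constructor arguments in place, `OuroConfig` can also be instantiated directly. A minimal sketch, assuming `configuration_ouro.py` is importable from a local checkout of the repo:

```python
# Sketch: exercise the two constructor arguments added in this commit.
from configuration_ouro import OuroConfig

config = OuroConfig(total_ut_steps=3, early_exit_threshold=0.9)
print(config.total_ut_steps, config.early_exit_threshold)  # 3 0.9

# Omitting them falls back to the defaults added in this commit.
default_config = OuroConfig()
print(default_config.total_ut_steps, default_config.early_exit_threshold)  # 4 1.0
```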