yhytoto12, nielsr (HF Staff) committed
Commit 637127c · verified · 1 Parent(s): f983305

Improve model card for ReVerT (Think, Verbalize, then Speak) (#1)


- Improve model card for ReVerT (Think, Verbalize, then Speak) (86aba8ddaacd2b47e8424098328196d19afd50c1)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +82 -146
README.md CHANGED
@@ -1,199 +1,135 @@
  ---
  library_name: transformers
  tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

- ## Training Details

- ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
  tags: []
+ pipeline_tag: text-generation
+ license: other
  ---

+ # Model Card for ReVerT (Think, Verbalize, then Speak)

+ This model implements the **ReVerT** verbalizer, a core component of the **Think-Verbalize-Speak (TVS)** framework, introduced in the paper [Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech](https://huggingface.co/papers/2509.16028).

  ## Model Details

  ### Model Description

+ Spoken dialogue systems increasingly employ large language models (LLMs) to leverage their advanced reasoning capabilities. However, direct application of LLMs in spoken communication often yields suboptimal results due to mismatches between optimal textual and verbal delivery. While existing approaches adapt LLMs to produce speech-friendly outputs, their impact on reasoning performance remains underexplored.

+ The **Think-Verbalize-Speak** framework decouples reasoning from spoken delivery to preserve the full reasoning capacity of LLMs. Central to this method is **verbalizing**, an intermediate step that translates complex thoughts into natural, speech-ready text. This model, **ReVerT**, is a latency-efficient verbalizer based on incremental and asynchronous summarization. Experiments across multiple benchmarks show that this method enhances speech naturalness and conciseness with minimal impact on reasoning.
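+ As a rough sketch of the idea (not the repository's actual implementation), the toy example below overlaps verbalization with ongoing thought generation using `asyncio`; the chunking rule and the `verbalize_chunk`/`think` stubs are illustrative assumptions.

+ ```python
+ import asyncio
+ from collections.abc import AsyncIterator
+ 
+ async def verbalize_chunk(chunk: str) -> str:
+     # Placeholder for a call to the ReVerT verbalizer; a real system would
+     # query yhytoto12/revert-Qwen2.5-3B here.
+     await asyncio.sleep(0.1)  # simulate model latency
+     return f"(speech-ready) {chunk}"
+ 
+ async def think() -> AsyncIterator[str]:
+     # Placeholder reasoning stream; a real Think model would stream text.
+     for chunk in ["Compute 12 * 7 = 84.", "Then subtract 4 to get 80."]:
+         yield chunk
+ 
+ async def think_then_verbalize() -> None:
+     # Start verbalizing each finished thought chunk immediately, so
+     # speech-ready text is produced while later reasoning continues.
+     tasks = []
+     async for chunk in think():
+         tasks.append(asyncio.create_task(verbalize_chunk(chunk)))
+     for task in tasks:
+         print(await task)
+ 
+ asyncio.run(think_then_verbalize())
+ ```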
+ - **Developed by:** Sang Hoon Woo, Sehun Lee, Kang-wook Kim, Gunhee Kim
+ - **Model type:** Qwen2ForCausalLM fine-tuned as a verbalizer for text generation
+ - **Language(s) (NLP):** English
+ - **License:** No explicit license is stated in the provided sources; please refer to the original project for license information.
+ - **Finetuned from model:** Qwen/Qwen2.5-3B-Instruct

+ ### Model Sources

+ - **Repository:** https://github.com/yhytoto12/TVS-ReVerT
+ - **Paper:** https://huggingface.co/papers/2509.16028
+ - **Project Page:** https://yhytoto12.github.io/TVS-ReVerT
+ ## 💥 News

+ - `2025.09.22` 🚀 We released our paper on [arXiv](https://arxiv.org/abs/2509.16028).
+ - `2025.09.19` 🔥 We released the training code, datasets, models, and interactive demo.
+ - `2025.08.21` 🎉 Our paper was accepted to **EMNLP 2025**!

+ ## 👀 Introduction

+ <p align="center">
+ <img src="https://github.com/yhytoto12/TVS-ReVerT/raw/main/assets/tvs-framework.png" width="100%"> <br>
+ </p>
+ ## Uses

+ ### Direct Use

+ This model is intended to be used as a "verbalizer" within a spoken dialogue system. Its primary purpose is to convert the complex, often structured "thoughts" generated by a large language model into natural, concise, speech-ready text that can then be fed into a text-to-speech (TTS) system. This preserves the full reasoning capacity of the LLM while optimizing the output for verbal delivery.
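+ For standalone experimentation, a minimal 🤗 Transformers sketch is shown below. The exact prompt/chat format the checkpoint expects is not documented in this card, so the instruction wording here is an assumption; see the repository for canonical usage inside the full TVS pipeline.

+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model_id = "yhytoto12/revert-Qwen2.5-3B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+ 
+ # Assumed prompt: ask the model to turn a raw chain of thought into speech-ready text.
+ thought = "x = (12 * 7) - 4 = 80, so the answer is 80."
+ messages = [{"role": "user", "content": f"Verbalize the following reasoning for speech:\n{thought}"}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ 
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```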
  ### Out-of-Scope Use

+ This model is not designed for direct end-to-end reasoning or speech synthesis; it focuses specifically on the text-to-text verbalization step. It should not be used as a standalone reasoning engine, nor should its outputs be consumed directly by users without further processing (e.g., TTS).

  ## Bias, Risks, and Limitations

+ - The model's performance and potential biases are influenced by the underlying base LLM (Qwen2.5) and the characteristics of the training datasets (GSM8k, 2WikiMultihopQA).
+ - While designed for naturalness and conciseness, the quality of verbalization may vary with the complexity and domain of the input "thoughts."
+ - The model's effectiveness is contingent on its integration into a larger Think-Verbalize-Speak framework, including a robust "Think" model and a speech synthesizer.
  ### Recommendations

+ Users should be aware of these limitations and consider the potential for biases inherited from the training data and base models. Thorough evaluation in target deployment scenarios is recommended, especially for sensitive applications.

  ## How to Get Started with the Model

+ You can try the interactive demo for the Think-Verbalize-Speak framework, which uses this ReVerT verbalizer. The setup instructions from the GitHub repository are reproduced below.
+ First, set up the environment:
+ ```bash
+ git clone https://github.com/yhytoto12/TVS-ReVerT.git
+ cd TVS-ReVerT
+ conda create -n tvs python=3.10
+ conda activate tvs
+ pip install -r requirements.txt

+ # Use flash attention for faster training and inference (optional)
+ pip install -U flash-attn --no-build-isolation

+ # For deepspeed training (optional)
+ pip install deepspeed
+ ```
+ Then, run the interactive demo using one of the following commands:

+ * **Using OpenAI models as the Think model:**
+ ```bash
+ python demo.py --think_model <openai_model_name> --verbalize_model yhytoto12/revert-Qwen2.5-3B --use_openai_think
+ ```

+ * **Using local models as the Think model (with vLLM backend):**
+ First, start the vLLM OpenAI-compatible server in one terminal:
+ ```bash
+ python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-7B-Instruct --host 0.0.0.0 --port 8000
+ ```
+ Then, run the demo in a separate terminal:
+ ```bash
+ python demo.py --think_model Qwen/Qwen2.5-7B-Instruct --verbalize_model yhytoto12/revert-Qwen2.5-3B --vllm_url http://localhost:8000/v1
+ ```
+ ## Training Details

+ ### Training Data

+ The ReVerT verbalizer models were trained using specialized datasets containing thought-verbalization pairs. These datasets are available on Hugging Face:

+ - [🤗 **GSM8k**](https://huggingface.co/datasets/yhytoto12/tvs-gsm8k)
+ - [🤗 **2WikiMultihopQA**](https://huggingface.co/datasets/yhytoto12/tvs-2wikimultihopqa)
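+ A quick way to inspect these datasets with the `datasets` library (a minimal sketch; the split and field names are whatever the dataset cards define):

+ ```python
+ from datasets import load_dataset
+ 
+ ds = load_dataset("yhytoto12/tvs-gsm8k")
+ print(ds)              # available splits and sizes
+ print(ds["train"][0])  # one thought-verbalization example (assumes a "train" split)
+ ```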
+ ### Training Procedure

+ Training scripts for the models discussed in the paper, including the ReVerT verbalizer, are provided in the [GitHub repository](https://github.com/yhytoto12/TVS-ReVerT) under the `scripts/` directory. The default base model for training is `Qwen/Qwen2.5-3B-Instruct`, which can be changed within the training scripts.

+ Example script for training the TVS (ReVerT) model:
+ ```bash
+ bash scripts/train_tvs_revert.sh -g <num_gpus>
+ ```

+ #### Training Hyperparameters

+ Specific training hyperparameters can be found in the `scripts/train_tvs_revert.sh` script and the associated configuration files in the GitHub repository.
  ## Evaluation

+ The paper reports experiments across multiple benchmarks showing that the Think-Verbalize-Speak method, including ReVerT, enhances speech naturalness and conciseness with minimal impact on reasoning performance. Refer to the [paper](https://huggingface.co/papers/2509.16028) for the full evaluation protocols and results.
 
+ ## Citation

+ If you find our project useful for your research and applications, please cite using this BibTeX:
+ ```bibtex
+ @inproceedings{tvs2025woolee,
+   title={Think, Verbalize, then Speak: Bridging Complex Thoughts and Comprehensible Speech},
+   author={Sang Hoon Woo and Sehun Lee and Kang-wook Kim and Gunhee Kim},
+   booktitle={Proceedings of EMNLP 2025},
+   year={2025}
+ }
+ ```