docs: update README
README.md
CHANGED
@@ -14,7 +14,8 @@ tags:
 ## Overview
 This hub features the pre-trained model by [DiariZen](https://github.com/BUTSpeechFIT/DiariZen). The EEND component is built upon WavLM Large and Conformer layers. The model was trained on far-field, single-channel audio from a diverse set of public datasets, including AMI, AISHELL-4, AliMeeting, NOTSOFAR-1, MSDWild, DIHARD3, RAMC, and VoxConverse.
 
-Then structured pruning at 80% sparsity is applied.
+Then structured pruning at 80% sparsity is applied. After pruning, the number of parameters in WavLM Large is reduced from **316.6M to 63.3M**, and the computational cost (MACs) decreases from **17.8G to 3.8G** per second.
+
 
 
 ## Usage
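As a quick sanity check on the figures added in this change, the arithmetic below is a minimal sketch: the 80% sparsity, parameter counts, and MACs are taken from the README text above, and everything else is illustrative only, not part of DiariZen's code.

```python
# Back-of-the-envelope check of the pruning figures quoted in the README.
# The 80% sparsity, 316.6M/63.3M parameters, and 17.8G/3.8G MACs come from the text;
# the variable names and computation here are purely illustrative.

sparsity = 0.80           # fraction of WavLM Large parameters removed by structured pruning
params_before_m = 316.6   # parameters before pruning, in millions
macs_before_g = 17.8      # MACs per second of audio before pruning, in G

params_after_m = params_before_m * (1.0 - sparsity)
print(f"expected params after pruning: {params_after_m:.1f}M")  # ~63.3M, matching the README

# MACs shrink by roughly the same factor, but not exactly: compute that does not
# scale with the pruned parameters keeps its original cost, so the README reports
# 3.8G rather than 17.8G * 0.2 = 3.56G.
print(f"naive MACs estimate: {macs_before_g * (1.0 - sparsity):.2f}G (README reports 3.8G)")
```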