Add usage instructions and colab link

README.md CHANGED

@@ -47,6 +47,22 @@ Note that Whisper's normalization has major issues for languages like Malayalam,
With normalization (for a fair comparison with other models on this platform), the results are instead:
- WER: 11.49
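
For reference, a normalized WER of this kind can be computed along the following lines. This is a minimal sketch, not this card's exact evaluation script: it assumes the `evaluate` library and Whisper's `BasicTextNormalizer` (which strips combining marks, one reason normalization is problematic for Malayalam), and the transcript strings are placeholders.

```python
# Sketch of a normalized WER computation (placeholder transcripts, not the
# card's actual evaluation data).
from evaluate import load
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = load("wer")
normalizer = BasicTextNormalizer()

references = ["നമസ്കാരം ലോകമേ"]   # placeholder ground-truth transcript
predictions = ["നമസ്കാരം ലോകമേ"]  # placeholder model output

wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER: {wer:.2f}")
```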

[This Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb) can be used as a starting point to further finetune the model.
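
For orientation, a fine-tuning run in the spirit of that notebook is typically set up with `Seq2SeqTrainer`. The sketch below is illustrative only, not the notebook's exact code: the output directory, hyperparameters, dataset, and collator are all placeholders.

```python
# Illustrative fine-tuning scaffold; values below are placeholders.
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("thennal/whisper-medium-ml")
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ml-finetuned",  # placeholder
    per_device_train_batch_size=16,              # placeholder
    learning_rate=1e-5,                          # placeholder
    max_steps=4000,                              # placeholder
    predict_with_generate=True,
)
# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=prepared_dataset,  # placeholder: log-mel inputs + label ids
#     data_collator=collator,          # placeholder: pads inputs and labels
# )
# trainer.train()
```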

## Usage instructions

Given an audio sample `audio` (this can be anything from a numpy array to a filepath), the following code generates transcriptions:

```python
from transformers import pipeline, WhisperProcessor

# Build the forced decoder ids that prompt Whisper to transcribe Malayalam ("ml")
processor = WhisperProcessor.from_pretrained("thennal/whisper-medium-ml")
forced_decoder_ids = processor.get_decoder_prompt_ids(language="ml", task="transcribe")

# device=0 places the pipeline on the first GPU; use device=-1 to run on CPU
asr = pipeline(
    "automatic-speech-recognition", model="thennal/whisper-medium-ml", device=0,
)

# Long inputs are split into 30-second chunks; 448 is Whisper's decoder token limit
transcription = asr(audio, chunk_length_s=30, max_new_tokens=448, return_timestamps=False, generate_kwargs={
    "forced_decoder_ids": forced_decoder_ids,
    "do_sample": True,
})
```
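
Continuing from the snippet above (the filename `sample.wav` is a placeholder), a plain filepath can be passed directly, and the pipeline returns a dict with the transcription under the `"text"` key:

```python
# Illustrative call, reusing `asr` and `forced_decoder_ids` from the snippet
# above; "sample.wav" is a placeholder path to any audio file.
result = asr(
    "sample.wav",
    chunk_length_s=30,
    max_new_tokens=448,
    generate_kwargs={"forced_decoder_ids": forced_decoder_ids},
)
print(result["text"])
```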

## Model description

More information needed