---
dataset_info:
  features:
  - name: sex
    dtype: string
  - name: subset
    dtype: string
  - name: id
    dtype: string
  - name: audio
    dtype: audio
  - name: transcript
    dtype: string
  - name: words
    list:
    - name: end
      dtype: float64
    - name: start
      dtype: float64
    - name: word
      dtype: string
  - name: phonemes
    list:
    - name: end
      dtype: float64
    - name: phoneme
      dtype: string
    - name: start
      dtype: float64
  splits:
  - name: dev_clean
    num_bytes: 365310608.879
    num_examples: 2703
  - name: dev_other
    num_bytes: 341143993.784
    num_examples: 2864
  - name: test_clean
    num_bytes: 377535532.98
    num_examples: 2620
  - name: test_other
    num_bytes: 351207892.569557
    num_examples: 2938
  - name: train_clean_100
    num_bytes: 6694747231.610863
    num_examples: 28538
  - name: train_clean_360
    num_bytes: 24163659711.787865
    num_examples: 104008
  - name: train_other_500
    num_bytes: 32945085271.89443
    num_examples: 148645
  download_size: 62101682957
  dataset_size: 65238690243.50571
configs:
- config_name: default
  data_files:
  - split: dev_clean
    path: data/dev_clean-*
  - split: dev_other
    path: data/dev_other-*
  - split: test_clean
    path: data/test_clean-*
  - split: test_other
    path: data/test_other-*
  - split: train_clean_100
    path: data/train_clean_100-*
  - split: train_clean_360
    path: data/train_clean_360-*
  - split: train_other_500
    path: data/train_other_500-*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: Librispeech Alignments
size_categories:
- 100K<n<1M
---
# Dataset Card for Librispeech Alignments

Librispeech with word- and phoneme-level alignments generated by the [Montreal Forced Aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/). The original alignments in TextGrid format can be found [here](https://zenodo.org/records/2619474).

## Dataset Details

### Dataset Description

Librispeech is a corpus of read English speech, designed for training and evaluating automatic speech recognition (ASR) systems. The dataset contains 1000 hours of 16 kHz read English speech derived from audiobooks.

The Montreal Forced Aligner (MFA) was used to generate word- and phoneme-level alignments for the Librispeech dataset.

- **Curated by:** Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur (for Librispeech)
- **Funded by:** DARPA LORELEI
- **Shared by:** Loren Lugosch (for the alignments)
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License
### Dataset Sources

- **Repository:** https://www.openslr.org/12
- **Paper:** https://arxiv.org/abs/1512.02595
- **Alignments:** https://zenodo.org/record/2619474
## Uses

### Direct Use

The Librispeech dataset can be used to train and evaluate ASR systems. The word- and phoneme-level alignments additionally support applications that need time-aligned transcripts.
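As a rough sketch, the dataset can be loaded with the 🤗 `datasets` library; the repo ID below is a placeholder, so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo ID: replace with this dataset's actual Hub path.
ds = load_dataset("<user>/librispeech-alignments", split="dev_clean")

sample = ds[0]
print(sample["id"])                      # (speaker id)-(chapter id)-(utterance id)
print(sample["transcript"])              # normalized, lowercased transcript
print(sample["audio"]["sampling_rate"])  # 16000
```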
### Out-of-Scope Use

The dataset contains only read speech, so models trained on it may not perform as well on spontaneous, conversational speech.
## Dataset Structure

The dataset contains 1000 hours of segmented read English speech from audiobooks. There are three training splits: 100 hours (train_clean_100), 360 hours (train_clean_360), and 500 hours (train_other_500), plus the dev and test splits (clean and other).

The alignments connect the audio to the reference text transcripts at the word and phoneme level.
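Because the full dataset is roughly 65 GB, it can be convenient to stream a single split instead of downloading everything; a sketch using the same placeholder repo ID as above:

```python
from datasets import load_dataset

# Stream one training split rather than downloading the whole ~65 GB dataset.
ds_stream = load_dataset(
    "<user>/librispeech-alignments", split="train_clean_100", streaming=True
)

for sample in ds_stream.take(3):
    print(sample["id"], len(sample["words"]), "aligned words")
```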
### Data Fields

Each example contains the following fields (a short access sketch follows the list):

- sex: `M` for male, `F` for female
- subset: one of dev_clean, dev_other, test_clean, test_other, train_clean_100, train_clean_360, train_other_500
- id: unique id of the data sample, formatted as (speaker id)-(chapter id)-(utterance id)
- audio: the audio, sampled at 16 kHz
- transcript: the spoken text of the utterance, normalized and lowercased
- words: a list of words with fields:
  - word: the text of the word
  - start: the start time in seconds
  - end: the end time in seconds
- phonemes: a list of phonemes with fields:
  - phoneme: the phoneme spoken
  - start: the start time in seconds
  - end: the end time in seconds
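As a sketch of how the alignment fields can be used, the snippet below (reusing `ds` from the loading example above) slices the waveform of the first aligned word from one utterance; field names follow the list above:

```python
sample = ds[0]

waveform = sample["audio"]["array"]    # numpy array of samples
sr = sample["audio"]["sampling_rate"]  # 16000

# Cut out the audio for the first aligned word using its start/end times (seconds).
first = sample["words"][0]
segment = waveform[int(first["start"] * sr) : int(first["end"] * sr)]
print(first["word"], round(first["end"] - first["start"], 2), "s", segment.shape)

# Phoneme-level timings are available in the same way.
for ph in sample["phonemes"][:5]:
    print(ph["phoneme"], ph["start"], ph["end"])
```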
## Dataset Creation

### Curation Rationale

Librispeech was created to further speech recognition research and to benchmark progress in the field.

### Source Data

#### Data Collection and Processing

The audio and reference texts were sourced from read English audiobooks in the LibriVox project. The data was segmented, filtered, and prepared for speech recognition.

#### Who are the source data producers?

The audiobooks are read by volunteers for the LibriVox project. Information about the readers is available in the LibriVox catalog.

### Annotations

#### Annotation process

The Montreal Forced Aligner was used to create word- and phoneme-level alignments between the audio and reference texts. The aligner is based on Kaldi.
When formatting the data into a Hugging Face dataset, words with empty text were removed, as were phonemes with empty text, silence tokens, or spacing tokens.
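A hedged sketch of how MFA TextGrid output of this kind might be parsed and filtered, assuming the `textgrid` Python package and MFA's usual tier names ("words" and "phones"); the tier names, silence labels, and file path here are assumptions, not details confirmed by this card:

```python
import textgrid  # pip install textgrid

# Labels commonly used for silence/spacing intervals; treated here as an assumption.
SKIP_LABELS = {"", "sil", "sp", "spn"}

tg = textgrid.TextGrid.fromFile("path/to/utterance.TextGrid")  # hypothetical path

words, phonemes = [], []
for tier in tg:  # each tier holds word or phone intervals
    for interval in tier:
        mark = interval.mark.strip()
        if mark.lower() in SKIP_LABELS:
            continue  # drop empty text, silence, and spacing tokens
        entry = {"start": interval.minTime, "end": interval.maxTime}
        if tier.name.lower() == "words":
            words.append({**entry, "word": mark})
        elif tier.name.lower() == "phones":
            phonemes.append({**entry, "phoneme": mark})

print(len(words), "words,", len(phonemes), "phonemes")
```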
#### Who are the annotators?

The alignments were generated automatically by the Montreal Forced Aligner and shared by Loren Lugosch. The TextGrid files were parsed and integrated into this dataset by Kim Gilkey.

#### Personal and Sensitive Information

The data contains read speech and transcripts. No personal or sensitive information is expected.
## Bias, Risks, and Limitations

The dataset contains only read speech from published books, not natural conversational speech, so models trained on it may perform worse on spontaneous or conversational speech.

### Recommendations

Users should understand that the alignments were generated automatically and may contain errors, and should account for this in their applications. For example, be wary of `<UNK>` tokens in the aligned words.
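For instance, a minimal sketch of dropping utterances whose aligned words contain an unknown token, reusing the `ds` object from the sketches above (the exact spelling of the token in this dataset is an assumption; inspect a few examples first):

```python
def has_unk(sample):
    # The exact spelling of the unknown token ("<UNK>" vs "<unk>") is an assumption.
    return any(w["word"].upper() == "<UNK>" for w in sample["words"])

# Keep only utterances without unknown tokens in their word alignments.
clean_ds = ds.filter(lambda sample: not has_unk(sample))
print(len(ds), "->", len(clean_ds))
```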
## Citation

**Librispeech:**
```
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015},
  organization={IEEE}
}
```

**Librispeech Alignments:**
```
@inproceedings{lugosch2019speech,
  title={Speech Model Pre-training for End-to-End Spoken Language Understanding},
  author={Lugosch, Loren and Ravanelli, Mirco and Ignoto, Patrick and Tomar, Vikrant Singh and Bengio, Yoshua},
  booktitle={Interspeech},
  year={2019}
}
```

**Montreal Forced Aligner:**
```
@inproceedings{mcauliffe2017montreal,
  title={Montreal Forced Aligner: Trainable text-speech alignment using Kaldi},
  author={McAuliffe, Michael and Socolof, Michaela and Mihuc, Sarah and Wagner, Michael and Sonderegger, Morgan},
  booktitle={Interspeech},
  year={2017}
}
```