---
language:
- tw
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
task_ids:
- keyword-spotting
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---

# Twi Words Speech-Text Parallel Dataset

## Dataset Description

This dataset contains 413,463 parallel speech-text pairs of individual spoken Twi words, suitable for speech technology research on an underrepresented language.

- **Language**: Twi (`tw`)
- **Total samples**: 413,463 audio-text pairs
- **Minimum file size**: > 1KB (small/corrupted files filtered out)
- **Format**: WAV audio files with corresponding text labels
- **Modalities**: Audio + Text

### Supported Tasks

- **Automatic Speech Recognition (ASR)**: Train models to convert Twi speech to text
- **Text-to-Speech (TTS)**: Use parallel data for TTS model development
- **Keyword Spotting**: Identify specific Twi words in audio
- **Phonetic Analysis**: Study Twi pronunciation patterns

## Dataset Structure

### Data Fields

- `audio`: Audio file in WAV format
- `text`: Corresponding text transcription

### Data Splits

The dataset contains a single training split with 413,463 filtered audio files.

### File Structure

Each audio segment is stored as a numbered pair:

- `NNNN.wav`: Audio file (e.g., `0001.wav`)
- `NNNN.txt`: Corresponding text file (e.g., `0001.txt`)

This structure ensures clean organization and easy pairing of audio-text data.

## Dataset Creation

### Source Data

The audio data was sourced ethically from consenting contributors. To protect the privacy of the original authors and speakers, specific source information cannot be shared publicly.

### Data Processing

1. Audio files were processed using forced alignment techniques
2. Word-level segmentation was performed, with padding to prevent abrupt cuts
3. Audio segments were filtered on:
   - Minimum duration requirements
   - Volume/vocal content thresholds
   - File size validation (> 1KB)
4. Each valid segment was saved as a numbered audio-text pair
5. Alignment and quality assurance used the [MMS-300M-1130 Forced Aligner](https://huggingface.co/MahmoudAshraf/mms-300m-1130-forced-aligner)

### Quality Control

- Empty or silent audio segments were automatically filtered out
- Very short segments (< 200ms) were excluded
- Low-volume segments were removed to ensure vocal content
- Audio padding (100ms) was added to prevent abrupt word cuts
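For illustration, the filtering and padding rules above could be implemented roughly as follows. This is a minimal sketch, not the dataset's actual pipeline code: the RMS threshold, the 16-bit PCM assumption, and the helper names (`keep_segment`, `padded_bounds`) are assumptions.

```python
import os
import wave

import numpy as np

# Thresholds mirroring the quality-control rules above; MIN_RMS is an
# assumed value, since the exact volume threshold is not published.
MIN_DURATION_S = 0.2   # segments shorter than 200 ms are excluded
MIN_FILE_BYTES = 1024  # files of 1 KB or less are treated as corrupted
MIN_RMS = 0.01         # assumed floor for "vocal content"
PAD_S = 0.1            # 100 ms of padding around each word


def keep_segment(path: str) -> bool:
    """Return True if a WAV segment (assumed 16-bit PCM) passes the filters."""
    # File-size check: tiny files are likely truncated or corrupted.
    if os.path.getsize(path) <= MIN_FILE_BYTES:
        return False
    with wave.open(path, "rb") as wav:
        n_frames = wav.getnframes()
        rate = wav.getframerate()
        # Duration check: drop empty or very short segments.
        if n_frames / rate < MIN_DURATION_S:
            return False
        # Volume check: drop silent or near-silent segments.
        samples = np.frombuffer(wav.readframes(n_frames), dtype=np.int16)
    rms = np.sqrt(np.mean((samples / 32768.0) ** 2))
    return rms >= MIN_RMS


def padded_bounds(start_s: float, end_s: float, total_s: float) -> tuple[float, float]:
    """Widen a word's alignment window by PAD_S on each side, clamped to the file."""
    return max(0.0, start_s - PAD_S), min(total_s, end_s + PAD_S)
```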
### Annotations

Text annotations are stored in separate `.txt` files corresponding to each audio file, representing the exact spoken content of each segment.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contributes to the preservation and digital representation of Twi, supporting:

- Language technology development for underrepresented languages
- Educational resources for Twi language learning
- Cultural preservation through digital archives

### Discussion of Biases

- The dataset may reflect the pronunciation patterns and dialects of specific regions or speakers
- Audio quality and recording conditions may vary across samples
- The vocabulary is limited to the words present in the collected samples

### Other Known Limitations

- Limited vocabulary scope (word-level rather than sentence-level)
- Potential audio quality variations
- Regional dialect representation may be uneven
- Automatic filtering may have removed some valid segments

## Additional Information

### Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

### Acknowledgments

- Audio processing and alignment performed using the [MMS-300M-1130 Forced Aligner](https://huggingface.co/MahmoudAshraf/mms-300m-1130-forced-aligner)
- The original audio was produced by the Ghana Institute of Linguistics, Literacy and Bible Translation in partnership with Davar Partners
- Automated quality filtering and padding were applied to ensure high-quality audio segments

### Citation Information

If you use this dataset in your research, please cite:

```
@dataset{twi_words_parallel_2025,
  title={Twi Words Speech-Text Parallel Dataset},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/michsethowusu/twi-words-speech-text-parallel}}
}
```

### Contact

For questions or concerns about this dataset, please open an issue in the dataset repository.

## Usage Example

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("michsethowusu/twi-words-speech-text-parallel")

# Inspect a few audio-text pairs
for example in dataset["train"].select(range(3)):
    audio = example["audio"]
    text = example["text"]
    print(f"Text: {text}")
    print(f"Audio sample rate: {audio['sampling_rate']}")
```
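If you work with the raw file layout described under "File Structure" rather than through `datasets`, pairing audio with text is straightforward. A minimal sketch, assuming a local copy of the files in a `data/` directory (the directory name is an assumption):

```python
from pathlib import Path

data_dir = Path("data")  # assumed local copy of the NNNN.wav / NNNN.txt files

pairs = []
for wav_path in sorted(data_dir.glob("*.wav")):
    txt_path = wav_path.with_suffix(".txt")
    if txt_path.exists():  # keep only complete audio-text pairs
        pairs.append((wav_path, txt_path.read_text(encoding="utf-8").strip()))

print(f"Found {len(pairs)} audio-text pairs")
```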