---
license: cc-by-sa-4.0
language:
- th
tags:
- speech-recognition
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: sentence
    dtype: string
  - name: speaker_id
    dtype: string
  - name: mic
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 8212128894.78
    num_examples: 120245
  - name: validation
    num_bytes: 1296622162.01
    num_examples: 13090
  - name: test
    num_bytes: 1623791447.32
    num_examples: 27580
  download_size: 13180732521
  dataset_size: 11132542504.109999
---

# LOTUSDIS

## Dataset Description

LOTUSDIS is a Thai far-field meeting corpus for robust conversational automatic speech recognition (ASR). Each example contains an `audio` recording (16 kHz), its `sentence` transcription, a `speaker_id`, the recording `mic`, and the clip `duration` in seconds. The corpus is split into train (120,245 examples), validation (13,090 examples), and test (27,580 examples) sets.

## How to use

You can load the dataset with the 🤗 `datasets` library in a single line of Python:

```python
from datasets import load_dataset

lotus_dis = load_dataset("nectec/LOTUSDIS", split="train")
```

To iterate through the dataset without downloading it entirely, use streaming mode:

```python
from datasets import load_dataset

lotus_dis = load_dataset("nectec/LOTUSDIS", split="train", streaming=True)
print(next(iter(lotus_dis)))
```

Learn more about loading and preparing audio datasets in the [Hugging Face Audio Datasets tutorial](https://huggingface.co/blog/audio-datasets).

Full meeting session resources:

- Audio files: [Download here](https://drive.google.com/file/d/1ofw99Y5W1p8f1DSaIbJkS0xWtuTI2Hrc/view)
- Annotation files (TextGrid): [Download here](https://drive.google.com/file/d/14fMv_X_8sGDPGbnU-hpJ85Mug43AHlgO/view)

## Citation

```
@misc{tipaksorn2025lotusdisthaifarfieldmeeting,
  title={LOTUSDIS: A Thai far-field meeting corpus for robust conversational ASR},
  author={Pattara Tipaksorn and Sumonmas Thatphithakkul and Vataya Chunwijitra and Kwanchiva Thangthai},
  year={2025},
  eprint={2509.18722},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.18722},
}
```
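
## Example: accessing the fields of a sample

As a minimal sketch (field names follow the dataset metadata above; the decoded-audio access pattern assumes the standard behavior of the `datasets` `Audio` feature, not an official preprocessing recipe), you can read the waveform and transcription of a streamed example like this:

```python
from datasets import load_dataset

# Stream a single example and inspect its fields without downloading the full corpus.
lotus_dis = load_dataset("nectec/LOTUSDIS", split="train", streaming=True)
sample = next(iter(lotus_dis))

waveform = sample["audio"]["array"]               # decoded waveform as a NumPy array
sampling_rate = sample["audio"]["sampling_rate"]  # 16000 Hz, per the dataset metadata
print(sample["sentence"], sample["speaker_id"], sample["mic"], sample["duration"])
print(waveform.shape, sampling_rate)
```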