"""SUPERB: Speech processing Universal PERformance Benchmark."""


import csv
import glob
import os
import textwrap

import datasets


_CITATION = """\
@article{DBLP:journals/corr/abs-2105-01051,
  author    = {Shu{-}Wen Yang and
               Po{-}Han Chi and
               Yung{-}Sung Chuang and
               Cheng{-}I Jeff Lai and
               Kushal Lakhotia and
               Yist Y. Lin and
               Andy T. Liu and
               Jiatong Shi and
               Xuankai Chang and
               Guan{-}Ting Lin and
               Tzu{-}Hsien Huang and
               Wei{-}Cheng Tseng and
               Ko{-}tik Lee and
               Da{-}Rong Liu and
               Zili Huang and
               Shuyan Dong and
               Shang{-}Wen Li and
               Shinji Watanabe and
               Abdelrahman Mohamed and
               Hung{-}yi Lee},
  title     = {{SUPERB:} Speech processing Universal PERformance Benchmark},
  journal   = {CoRR},
  volume    = {abs/2105.01051},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.01051},
  archivePrefix = {arXiv},
  eprint    = {2105.01051},
  timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
"""

_DESCRIPTION = """\
Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.

Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .flac format and is not converted to a float32 array. To
convert the audio file to a float32 array, please make use of the `.map()`
function as follows:


```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
"""
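

# Example usage (a minimal sketch; it assumes this script is resolvable by
# `datasets.load_dataset`, e.g. as the "superb" dataset on the Hugging Face Hub
# or via a local path to this file):
#
#     from datasets import load_dataset
#
#     ks_test = load_dataset("superb", "ks", split="test")
#     print(ks_test[0]["file"], ks_test[0]["label"])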


class SuperbConfig(datasets.BuilderConfig):
    """BuilderConfig for Superb."""

    def __init__(
        self,
        features,
        url,
        data_url=None,
        supervised_keys=None,
        **kwargs,
    ):
        super().__init__(version=datasets.Version("1.9.0", ""), **kwargs)
        self.features = features
        self.data_url = data_url
        self.url = url
        self.supervised_keys = supervised_keys


class Superb(datasets.GeneratorBasedBuilder):
    """Superb dataset."""
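
    # `datasets.GeneratorBasedBuilder` drives the three hooks implemented
    # below: `_info` returns the dataset metadata, `_split_generators`
    # downloads the data and declares the splits, and `_generate_examples`
    # yields `(key, example)` pairs for one split.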

    BUILDER_CONFIGS = [
        SuperbConfig(
            name="asr",
            description=textwrap.dedent(
                """\
            ASR transcribes utterances into words. While PR analyzes the
            improvement in modeling phonetics, ASR reflects the significance of
            the improvement in a real-world scenario. LibriSpeech
            train-clean-100/dev-clean/test-clean subsets are used for
            training/validation/testing. The evaluation metric is word error
            rate (WER)."""
            ),
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.features.Audio(sampling_rate=16_000),
                    "text": datasets.Value("string"),
                    "speaker_id": datasets.Value("int64"),
                    "chapter_id": datasets.Value("int64"),
                    "id": datasets.Value("string"),
                }
            ),
            supervised_keys=("file", "text"),
            url="http://www.openslr.org/12",
            data_url="data/LibriSpeech-test-clean.zip",
        ),
        SuperbConfig(
            name="ks",
            description=textwrap.dedent(
                """\
            Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
            words. The task is usually performed on-device for a fast response time. Thus, accuracy, model size, and
            inference time are all crucial. SUPERB uses the widely used Speech Commands dataset v1.0 for the task.
            The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include
            false positives. The evaluation metric is accuracy (ACC)."""
            ),
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.features.Audio(sampling_rate=16_000),
                    "label": datasets.ClassLabel(
                        names=[
                            "yes",
                            "no",
                            "up",
                            "down",
                            "left",
                            "right",
                            "on",
                            "off",
                            "stop",
                            "go",
                            "_silence_",
                            "_unknown_",
                        ]
                    ),
                }
            ),
            supervised_keys=("file", "label"),
            url="https://www.tensorflow.org/datasets/catalog/speech_commands",
            data_url="data/speech_commands_test_set_v0.01.zip",
        ),
        SuperbConfig(
            name="ic",
            description=textwrap.dedent(
                """\
            Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
            speakers. SUPERB uses the Fluent Speech Commands dataset, where each utterance is tagged with three intent
            labels: action, object, and location. The evaluation metric is accuracy (ACC)."""
            ),
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.features.Audio(sampling_rate=16_000),
                    "speaker_id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "action": datasets.ClassLabel(
                        names=["activate", "bring", "change language", "deactivate", "decrease", "increase"]
                    ),
                    "object": datasets.ClassLabel(
                        names=[
                            "Chinese",
                            "English",
                            "German",
                            "Korean",
                            "heat",
                            "juice",
                            "lamp",
                            "lights",
                            "music",
                            "newspaper",
                            "none",
                            "shoes",
                            "socks",
                            "volume",
                        ]
                    ),
                    "location": datasets.ClassLabel(names=["bedroom", "kitchen", "none", "washroom"]),
                }
            ),
            supervised_keys=None,
            url="https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/",
            data_url="data/fluent_speech_commands_dataset.zip",
        ),
        SuperbConfig(
            name="si",
            description=textwrap.dedent(
                """\
            Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
            classification, where speakers are in the same predefined set for both training and testing. The widely
            used VoxCeleb1 dataset is adopted, and the evaluation metric is accuracy (ACC)."""
            ),
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.features.Audio(sampling_rate=16_000),
                    # VoxCeleb1 speaker ids run from "id10001" to "id11251"
                    "label": datasets.ClassLabel(names=[f"id{i + 10001}" for i in range(1251)]),
                }
            ),
            supervised_keys=("file", "label"),
            url="https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html",
            data_url="data/VoxCeleb1.zip",
        ),
        SuperbConfig(
            name="er",
            description=textwrap.dedent(
                """\
            Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset
            IEMOCAP is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion
            classes to leave the final four classes with a similar amount of data points and cross-validate on five
            folds of the standard splits. The evaluation metric is accuracy (ACC)."""
            ),
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.features.Audio(sampling_rate=16_000),
                    "label": datasets.ClassLabel(names=["neu", "hap", "ang", "sad"]),
                }
            ),
            supervised_keys=("file", "label"),
            url="https://sail.usc.edu/iemocap/",
            data_url="data/IEMOCAP_full_release.zip",
        ),
    ]
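    # One config per task: "asr", "ks", "ic", "si" and "er". Note that
    # `_split_generators` below also handles an "sd" (speaker diarization)
    # branch for which no config is defined in this version of the script.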

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=self.config.features,
            supervised_keys=self.config.supervised_keys,
            homepage=self.config.url,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        if self.config.name == "asr":
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path}),
            ]
        elif self.config.name == "ks":
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path, "split": "test"}
                ),
            ]
        elif self.config.name == "ic":
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path, "split": "test"}
                ),
            ]
        elif self.config.name == "si":
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                # split id 3 corresponds to the test partition in the
                # VoxCeleb1 split file read by `_generate_examples`
                datasets.SplitGenerator(
                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path, "split": 3}
                ),
            ]
        elif self.config.name == "sd":
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path, "split": "test"}
                )
            ]
        elif self.config.name == "er":
            # IEMOCAP Session1 serves as the held-out evaluation fold here
            archive_path = dl_manager.download_and_extract(self.config.data_url)
            return [
                datasets.SplitGenerator(
                    name="session1", gen_kwargs={"archive_path": archive_path, "split": 1},
                )
            ]
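
    # Each branch above wires up a single evaluation split. `_generate_examples`
    # below mirrors that dispatch on `self.config.name`, with one parsing
    # strategy per task.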
    def _generate_examples(self, archive_path, split=None):
        """Generate examples."""
        if self.config.name == "asr":
            transcripts_glob = os.path.join(archive_path, "LibriSpeech", "*/*/*/*.txt")
            key = 0
            for transcript_path in sorted(glob.glob(transcripts_glob)):
                transcript_dir_path = os.path.dirname(transcript_path)
                with open(transcript_path, "r", encoding="utf-8") as f:
                    for line in f:
                        line = line.strip()
                        # each line reads "<speaker>-<chapter>-<utterance> <transcript>"
                        id_, transcript = line.split(" ", 1)
                        audio_file = f"{id_}.flac"
                        speaker_id, chapter_id = [int(el) for el in id_.split("-")[:2]]
                        audio_path = os.path.join(transcript_dir_path, audio_file)
                        yield key, {
                            "id": id_,
                            "speaker_id": speaker_id,
                            "chapter_id": chapter_id,
                            "file": audio_path,
                            "audio": audio_path,
                            "text": transcript,
                        }
                        key += 1
        elif self.config.name == "ks":
            words = ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go"]
            splits = _split_ks_files(archive_path, split)
            for key, audio_file in enumerate(sorted(splits[split])):
                base_dir, file_name = os.path.split(audio_file)
                _, word = os.path.split(base_dir)
                # the parent directory names the keyword; anything outside the
                # ten target words maps to "_silence_" or "_unknown_"
                if word in words:
                    label = word
                elif word == "_silence_" or word == "_background_noise_":
                    label = "_silence_"
                else:
                    label = "_unknown_"
                yield key, {"file": audio_file, "audio": audio_file, "label": label}
        elif self.config.name == "ic":
            root_path = os.path.join(archive_path, "fluent_speech_commands_dataset/")
            csv_path = os.path.join(root_path, f"data/{split}_data.csv")
            with open(csv_path, encoding="utf-8") as csv_file:
                csv_reader = csv.reader(csv_file, delimiter=",", skipinitialspace=True)
                next(csv_reader)  # skip the header row
                for row in csv_reader:
                    key, file_path, speaker_id, text, action, object_, location = row
                    audio_path = os.path.join(root_path, file_path)
                    yield key, {
                        "file": audio_path,
                        "audio": audio_path,
                        "speaker_id": speaker_id,
                        "text": text,
                        "action": action,
                        "object": object_,
                        "location": location,
                    }
        elif self.config.name == "si":
            wav_path = os.path.join(archive_path, "wav/")
            splits_path = os.path.join(archive_path, "veri_test_class.txt")
            with open(splits_path, "r", encoding="utf-8") as f:
                for key, line in enumerate(f):
                    # each line reads "<split id> <speaker>/<video>/<utterance>.wav"
                    split_id, file_path = line.strip().split(" ")
                    if int(split_id) != split:
                        continue
                    speaker_id = file_path.split("/")[0]
                    audio_path = os.path.join(wav_path, file_path)
                    yield key, {
                        "file": audio_path,
                        "audio": audio_path,
                        "label": speaker_id,
                    }
        elif self.config.name == "er":
            root_path = os.path.join(archive_path, f"Session{split}/")
            wav_path = os.path.join(root_path, "sentences/wav/")
            labels_path = os.path.join(root_path, "dialog/EmoEvaluation/*.txt")
            # keep the four balanced classes; "exc" (excited) is merged into "hap"
            emotions = ["neu", "hap", "ang", "sad", "exc"]
            key = 0
            for labels_file in sorted(glob.glob(labels_path)):
                with open(labels_file, "r", encoding="utf-8") as f:
                    for line in f:
                        # utterance-level labels start with a "[start - end]" time range
                        if line[0] != "[":
                            continue
                        _, filename, emo, _ = line.split("\t")
                        if emo not in emotions:
                            continue
                        wav_subdir = filename.rsplit("_", 1)[0]
                        filename = f"{filename}.wav"
                        audio_path = os.path.join(wav_path, wav_subdir, filename)
                        yield key, {
                            "file": audio_path,
                            "audio": audio_path,
                            "label": emo.replace("exc", "hap"),
                        }
                        key += 1
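

# Speech Commands v0.01 distributes its train/val/test split as
# `validation_list.txt` and `testing_list.txt`; the helper below reproduces
# that convention, treating every clip listed in neither file as training
# data. The bundled test archive ships without the list files, so all of its
# clips belong to the test split.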
def _split_ks_files(archive_path, split):
    audio_path = os.path.join(archive_path, "**/*.wav")
    audio_paths = glob.glob(audio_path)
    if split == "test":
        # the test archive contains only test clips, so every audio file is used
        return {"test": audio_paths}

    val_list_file = os.path.join(archive_path, "validation_list.txt")
    test_list_file = os.path.join(archive_path, "testing_list.txt")
    with open(val_list_file, encoding="utf-8") as f:
        val_paths = f.read().strip().splitlines()
        val_paths = [os.path.join(archive_path, p) for p in val_paths]
    with open(test_list_file, encoding="utf-8") as f:
        test_paths = f.read().strip().splitlines()
        test_paths = [os.path.join(archive_path, p) for p in test_paths]

    # the train set is everything that appears in neither the validation list
    # nor the test list
    train_paths = list(set(audio_paths) - set(val_paths) - set(test_paths))

    return {"train": train_paths, "val": val_paths}
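
# For reference, `_generate_examples` consumes this as
# `_split_ks_files(archive_path, split)[split]`; only the "test" split is
# wired up in `_split_generators` above.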