diff --git "a/abs_29K_G/test_abstract_long_2405.00966v1.json" "b/abs_29K_G/test_abstract_long_2405.00966v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00966v1.json" @@ -0,0 +1,371 @@ +{ + "url": "http://arxiv.org/abs/2405.00966v1", + "title": "Efficient Compression of Multitask Multilingual Speech Models", + "abstract": "Whisper is a multitask and multilingual speech model covering 99 languages.\nIt yields commendable automatic speech recognition (ASR) results in a subset of\nits covered languages, but the model still underperforms on a non-negligible\nnumber of under-represented languages, a problem exacerbated in smaller model\nversions. In this work, we examine its limitations, demonstrating the presence\nof speaker-related (gender, age) and model-related (resourcefulness and model\nsize) bias. Despite that, we show that only model-related bias are amplified by\nquantization, impacting more low-resource languages and smaller models.\nSearching for a better compression approach, we propose DistilWhisper, an\napproach that is able to bridge the performance gap in ASR for these languages\nwhile retaining the advantages of multitask and multilingual capabilities. Our\napproach involves two key strategies: lightweight modular ASR fine-tuning of\nwhisper-small using language-specific experts, and knowledge distillation from\nwhisper-large-v2. This dual approach allows us to effectively boost ASR\nperformance while keeping the robustness inherited from the multitask and\nmultilingual pre-training. Results demonstrate that our approach is more\neffective than standard fine-tuning or LoRA adapters, boosting performance in\nthe targeted languages for both in- and out-of-domain test sets, while\nintroducing only a negligible parameter overhead at inference.", + "authors": "Thomas Palmeira Ferraz", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SD", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Whisper is a multitask and multilingual speech model covering 99 languages.\nIt yields commendable automatic speech recognition (ASR) results in a subset of\nits covered languages, but the model still underperforms on a non-negligible\nnumber of under-represented languages, a problem exacerbated in smaller model\nversions. In this work, we examine its limitations, demonstrating the presence\nof speaker-related (gender, age) and model-related (resourcefulness and model\nsize) bias. Despite that, we show that only model-related bias are amplified by\nquantization, impacting more low-resource languages and smaller models.\nSearching for a better compression approach, we propose DistilWhisper, an\napproach that is able to bridge the performance gap in ASR for these languages\nwhile retaining the advantages of multitask and multilingual capabilities. Our\napproach involves two key strategies: lightweight modular ASR fine-tuning of\nwhisper-small using language-specific experts, and knowledge distillation from\nwhisper-large-v2. This dual approach allows us to effectively boost ASR\nperformance while keeping the robustness inherited from the multitask and\nmultilingual pre-training. 
Results demonstrate that our approach is more\neffective than standard fine-tuning or LoRA adapters, boosting performance in\nthe targeted languages for both in- and out-of-domain test sets, while\nintroducing only a negligible parameter overhead at inference.", + "main_content": "Introduction 1.1 Motivation Over the past three years, the field of Natural Language Processing (NLP) has been revolutionized by the introduction of large pre-trained models, often referred to as \"foundation models.\" These models, both for text and speech, are trained on vast amounts of unlabeled data and can subsequently be fine-tuned for specific tasks using limited labeled data. Multilingual foundation models have garnered significant attention due to their ability to handle hundreds of languages within a single model. However, they face a challenge known as the curse of multilinguality: in order to maintain high performance across all supported languages, these models require an increase in the number of parameters, leading to larger memory requirements and slower inference times. This can render the use of such models impractical in certain scenarios. To address this issue, research has been conducted on model compression techniques, although these methods may inadvertently exacerbate biases present in the model. This internship focuses on OpenAI\u2019s Whisper, a family of multilingual multi-task speech models known for their impressive performance in speech recognition. These models exhibit robustness when transcribing speech recorded under various conditions, surpassing the capabilities of previous models. However, there remain important questions to explore regarding Whisper and its multitask learning approach. Although the model presents exceptional capability for transcribing and translating English, its performance in other languages indicates a decline in multilingual capabilities as the model size decreases. Additionally, we aim to investigate how this multilingual architecture handles biases related to different speakers, including gender, age, and accent. These questions drive our research to enhance the understanding of Whisper\u2019s capabilities and limitations. 1.2 Internship Objectives This internship has three main objectives: (1) Conduct a comprehensive analysis of bias within the Whisper model family, with a specific focus speaker-related (gender, age, accent) and modelrelated (model size, resourcefulness, similar languages) biases; \f2 INTRODUCTION (2) Explore how light compression techniques, such as quantization, may either mitigate or exacerbate any identified biases within the Whisper models; (3) Propose a better compression approach that effectively reduces any disparities found in the models. 1.3 Contributions of this work This work offers two significant contributions. Firstly, it provides a comprehensive analysis of the biases present in the Whisper model and examines how quantization impacts these biases. Secondly, it introduces an alternative model compression method called DistilWhisper, which enhances the performance of smaller Whisper models. Additionally, all models and code developed in this research will be made available as open-source resources. The structure of this report is as follows: Chapter 2 provides essential fundamentals and a comparison with related work to establish a foundational understanding. Chapter 3 details the experimental setup and results of the investigation into bias when quantizing Whisper. 
This investigation leads to the proposal of DistilWhisper, in Chapter 4, a novel parameter-efficient distillation approach that leverages small pre-trained models. Chapter 5 covers the validation of the proposed approach, as well as some interesting analysis. Finally, Chapter 6 summarizes the primary findings and conclusions of this work. 1.4 About NAVER LABS Europe NAVER LABS is the R&D subsidiary of NAVER, Korea\u2019s leading internet company and the part of NAVER responsible for creating future technology. Its world-class researchers in Korea and Europe create new connections between people, machines, spaces and information by advancing technology in AI, robotics, autonomous driving, 3D/HD mapping and AR. NAVER LABS Europe is the biggest industrial research lab in artificial intelligence in France and a hub of NAVER\u2019s global AI R&D Belt, a network of centers of excellence in Korea, Japan, Vietnam, USA & Europe. The scientists at NAVER LABS Europe conduct fundamental and applied research in machine learning (optimization, robotics), computer vision, natural language processing and UX and ethnography. The site is located in Grenoble, France. \fBACKGROUND AND RELATED WORK 3 2 Background and Related Work 2.1 State of the Art for Automatic Speech Recognition Current ASR approaches primarily involve adapting pre-trained Transformer stacks (Vaswani et al., 2017), which are initially trained through self-supervised learning (SSL) on unlabeled audio data. These pre-trained models can vary in their use of pre-text tasks (e.g., wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021), WavLM (Chen et al., 2022)) and the range of languages they cover (e.g., XLSR-53 (Conneau et al., 2021), XLS-R (Babu et al., 2022), MMS (Pratap et al., 2023), Google-USM (Y. Zhang et al., 2023)). This development of models has also seen the introduction of monolingual and multilingual SSL benchmarks. Examples of such benchmarks include SUPERB for English (Yang et al., 2021), LeBenchmark (Evain et al., 2021) for French, and ML-SUPERB (Shi et al., 2023), which covers 143 languages. In contrast to this line of research, the Whisper model relies on weak supervision, meaning it is trained solely on weakly labeled data (without self-supervision). Nevertheless, with an ample amount of data, the Whisper model achieves competitive results when compared to monolingual (Gandhi et al., 2022; Radford et al., 2023) and multilingual (Pratap et al., 2023) SSL models. More details about Whisper can be found on Section 2.6. For broader ASR benchmarks, facilitating comparisons between SSL pretraining and multitasking weakly-supervised training, the ESB benchmark from HuggingFace (Gandhi et al., 2022) for English is an illustrative example. 2.2 Domain Adaptation Domain adaptation consist in the process of adapting a pre-existing trained model to a new domain or task with minor weight adjustments, rather than retraining the entire model from scratch. In the past, this adaptation was primarily carried out through full fine-tuning, where all the model\u2019s weights were updated. In the case of Transformerbased models, it is also common to proceed adaptation choosing to update only specific layers, usually the final ones (Laskar et al., 2022). More recently, the practice of domain adaptation has seen the emergence of Adapterbased techniques, initially proposed by Houlsby et al. (2019). Adapters are lightweight modules commonly used in both NLP and Speech to adapt pre-trained models to new tasks or domains. 
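As an illustration of the general idea, a Houlsby-style bottleneck adapter wraps a frozen pre-trained layer with a small residual module; the following is a minimal sketch (not the specific adapters used in the works cited next, and the layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal residual bottleneck adapter in the spirit of Houlsby et al. (2019)."""
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)   # project down to a small bottleneck
        self.up = nn.Linear(bottleneck, d_model)     # project back to the model dimension
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter only learns a small correction on top
        # of the frozen pre-trained representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During adaptation, the pre-trained model stays frozen and only the adapter
# parameters (a few percent of the model size) receive gradient updates.
adapter = BottleneckAdapter(d_model=768)
x = torch.randn(2, 50, 768)   # (batch, sequence, hidden)
y = adapter(x)                # same shape as the input
```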
In speech-related tasks, Adapter-based fine-tuning has found applications in speech translation (Antonios et al., 2022; Gow-Smith et al., 2023; Le et al., 2021), domain adaptation (Thomas et al., 2022; Tomanek et al., 2021), and other \f4 BACKGROUND AND RELATED WORK tasks. They have demonstrated comparable performance to standard fine-tuning while utilizing only a fraction of trainable parameters. Furthermore, there are efforts to adapt Whisper models to specific tasks using LoRA adapters (e.g. Arabic dialect identification (Radhakrishnan et al., 2023), spoken language understanding (M. Wang et al., 2023), emotion recognition (Feng & Narayanan, 2023)). This technique is elaborated in Section 2.2.1. Additionally, some work involves full fine-tuning for task adaptation (e.g child spoken language understanding (Jain et al., 2023)). In contrast to adapters and full fine-tuning, our work introduces gated Language-specific layers into the Whisper model and presents a parameter-efficient Knowledge Distillation approach. These innovations enhance the model\u2019s robustness to out-of-domain data. 2.2.1 Low-rank Adapters (LoRA) Low-rank Adapter (LoRA) fine-tuning, as proposed by Hu et al. (2022), is a technique designed to reduce memory requirements for domain adaptation. This is achieved by introducing new trainable parameters into a pre-trained neural network while keeping the original pre-trained model weights fixed. These introduced parameters take the form of trainable rank decomposition matrices, and they are inserted between specific layers or blocks of the model. This approach significantly reduces the number of parameters that need to be fine-tuned when adapting the model for specific downstream tasks. For example, when fine-tuning a multilingual multi-task model for a single language and task, LoRA adapters help streamline the adaptation process. The key assumption behind LoRA is that weight matrix updates in Transformer-based models exhibit a low \"intrinsic rank\" when undergoing full fine-tuning. This means that a pre-trained weight matrix, denoted as W0 \u2208Rd\u00d7k, can be effectively represented using a low-rank matrix decomposition, denoted as W0 + \u2206W = W0 + BA, where B \u2208Rd\u00d7r, A \u2208Rr\u00d7k, and the rank r \u226amin(d, k). Importantly, during LoRA fine-tuning, the W0 part remains fixed (frozen) and does not receive gradient updates, while A and B become sets of trainable parameters. h = W0x + \u2206Wx = W0x + BAx (2.1) One significant advantage of this approach is that it allows for parallel computation during the forward pass. Specifically, the forward pass output h can be efficiently computed \fBACKGROUND AND RELATED WORK 5 in parallel, and then the partial results are summed coordinate-wise, as presented in Equation 2.1. 2.3 Quantization Quantization is a well-established technique in the field of Deep Learning, employed to increase the efficiency of neural networks. Historically, neural networks were often trained using low-precision numerical representations (Hubara et al., 2017). However, a recent trend, particularly in NLP , involves post-training quantization. This technique entails applying quantization to models after they have been trained with regular, higher precision. This approach has gained traction as it offers the dual benefits of reducing inference latency and model size. 
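In practice, this kind of post-training quantization can be applied off the shelf. As a sketch (assuming the HuggingFace Transformers and bitsandbytes libraries, which expose the LLM.int8() scheme discussed below; exact APIs depend on library versions), a Whisper checkpoint can be loaded with 8-bit weights roughly as follows:

```python
from transformers import AutoModelForSpeechSeq2Seq, BitsAndBytesConfig

# Linear-layer weights are stored in int8; outlier activation dimensions are
# handled in higher precision internally by the LLM.int8() kernels.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-small",
    quantization_config=quant_config,
    device_map="auto",   # places layers on the available devices (requires accelerate)
)

print(f"memory footprint: {model.get_memory_footprint() / 1e6:.0f} MB")
```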
Post-training quantization has found widespread use in various domains, including machine translation and language models (Bondarenko et al., 2021; Liang et al., 2021; Menghani, 2023; Wu et al., 2020). Quantized NLP models have yielded promising results, making it an appealing approach. One of the most widely adopted techniques for post-training quantization in both NLP and speech communities is the LLM.int8() algorithm (Dettmers et al., 2022). This method implements quantization in the feed-forward and attention projection layers of the Transformer architecture. The method has two parts: vector-wise quantization and mixed precision decomposition. In the vector-wise quantization, it is determined conversion constants that allow for the recovery of original numbers from 8-bit to 16-bit floating-point representations. This enables matrix multiplication to be carried out in the lower 8-bit precision. Moreover, in the mixed precision decomposition, it identifies potential outliers that could be adversely impacted by reduced precision and then executes this part of the matrix multiplication in 16-bit precision. While initially designed for decoder-only large language models (LLMs), this quantization method, along with its 4-bit variation (Dettmers & Zettlemoyer, 2023), has gained widespread adoption for various Transformer-based models. It has been made readily available in the Transformers library by Hugging Face (Wolf et al., 2020), contributing to its popularity. Additionally, it is becoming common to combine this quantization technique with domain adaptation methods. For instance, the QLoRA (Dettmers et al., 2023) method incorporates LoRA adapters on top of a quantized Transformer model. \f6 BACKGROUND AND RELATED WORK 2.4 Knowledge Distillation Knowledge distillation (KD) has been initially proposed by Hinton et al. (2015) to distill knowledge from ensemble of models into a single model. Over time, KD has evolved to distill knowledge from a large teacher model into smaller student models (Mohammadshahi et al., 2022; Sanh et al., 2020; Shen et al., 2023). Knowledge distillation can be approached in two primary ways: representation matching or distribution matching. In this work, our focus is on distribution matching. Traditional distribution matching knowledge distillation methods involves minimizing the Kullback\u2013Leibler (KL) divergence between a teacher model and a student model. This is mathematically represented by Equation 2.2: JKL = DKL(p\u2225q\u03b8) = EY\u223cp \u0014 log p(Y) q\u03b8(Y) \u0015 (2.2) where p is the teacher distribution, q\u03b8 is the student distribution, and Y is sampled from the teacher distribution. However, learning based on KL divergence at the sequence level can often lead to the student distribution becoming overly smooth, as it attempts to cover the entire support of the teacher distribution. This behavior arises due to the asymmetric nature of the KL divergence, a phenomenon sometimes referred to as the mode-averaging problem, as demonstrated by (Wen et al., 2023). Recent research (Go et al., 2023; Wen et al., 2023) have shown that symmetric divergences, such as the Jensen-Shannon (JS) divergence, exhibit fewer borderline behaviors and tend to yield improved results in sequence-level distillation. 
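To make the contrast concrete, the sketch below computes token-level versions of the two distillation objectives from teacher and student logits (illustrative PyTorch; the sequence-level JS formulation actually used in this work is given next, in Equation 2.3):

```python
import torch
import torch.nn.functional as F

def kd_losses(teacher_logits, student_logits, tau: float = 1.0):
    """Token-level KL and JS distillation losses between teacher and student."""
    p = F.softmax(teacher_logits / tau, dim=-1)   # teacher distribution
    q = F.softmax(student_logits / tau, dim=-1)   # student distribution
    eps = 1e-9

    # Forward KL(p || q): the student must cover the whole teacher support,
    # which can over-smooth the student (the mode-averaging problem).
    kl = torch.sum(p * (p.clamp_min(eps).log() - q.clamp_min(eps).log()), dim=-1)

    # Jensen-Shannon: symmetric comparison against the mixture m = (p + q) / 2.
    m = 0.5 * (p + q)
    js = 0.5 * torch.sum(p * (p.clamp_min(eps).log() - m.clamp_min(eps).log()), dim=-1) \
       + 0.5 * torch.sum(q * (q.clamp_min(eps).log() - m.clamp_min(eps).log()), dim=-1)

    return kl.mean(), js.mean()

# Example with random logits over a Whisper-sized vocabulary (51865 tokens).
teacher = torch.randn(4, 20, 51865)   # (batch, tokens, vocab)
student = torch.randn(4, 20, 51865)
kl_loss, js_loss = kd_losses(teacher, student)
```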
Traditional JS divergence is expressed in Equation 2.3: JJS = DJS(p\u2225q\u03b8) = 1 2EY\u223cp h log p(Y) m(Y) i + 1 2EY\u2032\u223cq\u03b8 h log q\u03b8(Y\u2032) m(Y\u2032) i (2.3) where p is the teacher distribution, q\u03b8 is the student distribution, Y and Y\u2032 are sampled from the teacher\u2019s and student\u2019s distributions and compared with their average m(\u00b7) = 1 2p(\u00b7) + 1 2q\u03b8(\u00b7). 2.5 Datasets for Multilingual ASR Here we present two widely used massively-multilingual datasets that will be used in this work: CommonVoice 13.0 and FLEURS. \fBACKGROUND AND RELATED WORK 7 2.5.1 CommonVoice 13.0 The CommonVoice 13.0 (CV-13) corpus (Ardila et al., 2020), represents the latest iteration of a massively multilingual collection of transcribed speech. It serves as a valuable resource for research and development in the field of speech technology. While primarily designed for Automatic Speech Recognition (ASR) applications, this dataset also finds utility in other domains, such as language identification. The utterances comprising this dataset are sourced from Wikipedia articles and supplemented with utterances contributed by language communities. These are subsequently narrated by contributors through Mozilla\u2019s website or iPhone app. To ensure data quality, contributions undergo validation by other volunteers, with only validated data being incorporated into the train, validation, and test subsets splits of the dataset. As of the current version, the dataset encompasses a rich tapestry of 110 languages, though the number of utterances per language varies significantly. 2.5.2 FLEURS The FLEURS (Conneau et al., 2023) is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark (Goyal et al., 2022), with approximately 12 hours of speech supervision per language. It was meant for few-shot learning on a variety of speech tasks, including Automatic Speech Recognition, Speech Language Identification, Speech Translation and Retrieval. The creation of this dataset involved the recording of all the publicly available sentences from FLoRes-101 (from dev and devtest split subsets). Each sentence was recorded by three paid native-speaker experts per language. Subsequently, these spoken sentences underwent a thorough evaluation by paid evaluators to ensure the overall quality and accuracy of the recorded content. The dataset is unbalanced as not all the sentences were validated, but most part of the languages have between 2400 and 3300 utterances on the train split, with an average 12 seconds per audio sample. 2.6 The Whisper Model In this section we present Whisper (Radford et al., 2023), the base model for the studies conducted in this work. 
\f8 BACKGROUND AND RELATED WORK Figure 1 The Whisper model architecture (Source: Radford et al. (2023)): a log-Mel spectrogram is processed by a convolutional stem and Transformer encoder blocks, while a Transformer decoder performs next-token prediction over the multitask training format (special tokens, text tokens, and timestamp tokens), learned from 680k hours of multitask training data. 2.6.1 Overview Whisper is designed to serve as a versatile end-to-end Automatic Speech Recognition (ASR) model suitable for a wide range of applications and languages. When it comes to ASR, previous research has predominantly focused on two key approaches: large-scale Unsupervised Learning (Y. Wang et al., 2022) and Supervised Learning, as discussed in Section 2.1. In the case of large-scale Unsupervised Learning, models benefit from training on vast, low-cost, and unlabeled datasets, which helps in building a high-quality encoding component. However, these models generate output that is not directly usable for ASR applications and requires further fine-tuning. On the other hand, Supervised Learning approaches utilize pre-trained models that can be directly used for ASR tasks. However, they often struggle to generalize when faced with shifts in the data distribution, primarily due to the limited size of the datasets they were originally trained on. Additionally, creating large-scale human-labeled datasets for these models can be prohibitively expensive. \fBACKGROUND AND RELATED WORK 9 Whisper takes a unique approach by introducing Weakly Supervised Learning, striking a balance between data quality and quantity. The Whisper training dataset is curated by collecting pairs of audio and corresponding transcripts from the internet (mainly YouTube videos). After minimal processing, which included language identification with the model proposed by Valk and Alum\u00e4e (2021), this dataset comprises a substantial 680,000 hours of highly diverse audio content.
Notably, it encompasses 96 languages besides English, with approximately 17.2% of the dataset consisting of audio and transcript pairs in the same language (ASR). Additionally, around 18.4% of the pairs have English-translated transcripts. This unique approach provides Whisper with several advantages. Firstly, the Whisper encoder benefits from the rich and diverse dataset, making it perform exceptionally well, similar to Unsupervised settings. Secondly, Whisper is trained with relatively clean labels, allowing it to be used in a Zero-Shot manner without the need for extensive finetuning. 2.6.2 Architecture The architecture of Whisper consists of the original Transformer architecture (Vaswani et al., 2017) preceded by dimension reduction layer called stem. The architecture is visually depicted in Figure 1. Stem The stem comprises a pair of 1-dimensional Convolution Layers, each accompanied by GELU activations. Both convolution layers employ filters of size 3 and produce d output channels. The value of d varies across different sizes of the Whisper architectures. The first convolution layer operates with a stride of 1, while the second employs a stride of 2 (effectively reducing the length of the input sequence by half). Consequently, the output of the stem consists of a sequence of 1500 elements, each with dimension d. As the self-attention layers in a Transformer exhibit quadratic complexity concerning the sequence length, for a fixed hidden representation size of d, the stem significantly reduces the computational complexity by a factor of 4. Transformer In their work, Radford et al. (2023) primarily highlights the impact of scaled Weak Supervision on ASR system performance, with less emphasis on architectural modifications. The base architecture employed for Whisper is the encoder-decoder Trans\f10 BACKGROUND AND RELATED WORK former, which is renowned for its scalability and reliability in several sequence-tosequence tasks. However, the Whisper Transformer does introduce a few key modifications compared to the original Transformer architecture. Sinusoidal encodings are added to the input representations of the encoder, while the positional encodings in the decoder are learned. Additionally, GELU activation functions are used instead of ReLU, and these activations are applied following the residual blocks. Moreover, a normalization layer is included in the encoder\u2019s output. Furthermore, Whisper offers a range of five different architecture sizes, as detailed in Table 1. These varying sizes cater to different requirements and performance needs, allowing for flexibility in ASR tasks. Model Layer (L) Width (d) Parameters Tiny 4 384 39M Base 6 512 74M Small 12 768 244M Medium 24 1024 769M Large 32 1280 1550M Table 1 Architectural specifications for the Whisper model family. L denotes the number of layers per block, indicating that, for example, the tiny model with L = 4 consists of 4 transformer layers in the encoder and 4 in the decoder. Tokenization To tokenize transcripts, the Whisper model employs the BPE (Byte Pair Encoding) tokenizer originally introduced in GPT-2 by Radford et al. (2019). When dealing with languages other than English, the tokenizer is adapted by refining it until the vocabulary size matches that of English. 2.6.3 Multitasking Whisper is trained and operates as a multitask model, capable of handling various sub-tasks within a single end-to-end architecture. 
These sub-tasks encompass Voice Activity Detection, Language Identification, Text Alignment, Transcription, Translation, and more. To delineate each task and the expected format of the subsequent predictions, specific tokens are employed, as delineated in Table 2. These tokens are positioned at the start of the output sequence, providing task context (see Figure 1). Token generation follows an auto-regressive process, reliant on prior tokens. For ex\fBACKGROUND AND RELATED WORK 11 ample, when the detected language is French, the model computes the likelihood of token w at position k\u2032, as illustrated in Equation 2.4: P(wk\u2032 = w| . . . , <|fr|>, |transcribe|, . . . , wk\u2032\u22121, X) (2.4) Consequently, the generated tokens will probably only belong to the French vocabulary as they have higher conditional probabilities compared to ones belonging to other languages. Tasks Tokens Language Identification <|LANGUAGE|> e.g. <|en|>, <|gl|>, <|fr|>, <|fa|>, etc. Voice Activity Detection <|nospeech|> Transcribe <|transcribe|> Translate <|translate|> Alignment <|notimestamps|> Table 2 Subset of special tokens associated with Whisper\u2019s multitasks. For Language Identification, each language is specified with a token, and a single token is added to the sequence. This token is required. For Voice Activity Detection, only when the audio does not contain clear speech that its corresponding token is present in the output. The tasks Transcribe and Translate are mutually exclusive, but one of them is required. Additionally, certain special tokens can be predefined to simplify predictions. In our work, we specifically enforce transcription and language tokens, thereby eliminating dependency on Language Identification quality for under-represented languages. Tasks not pertinent to our study are also disregarded. \f12 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS 3 Bias Analysis on Quantized Speech Models In this chapter, we aim at addressing the two first objective of the internship: understand the bias presented on Whisper models, and investigate how these are impacted by the employment of quantization. 3.1 Experimental Setup 3.1.1 Dataset preparation In our research, we employed the two widely recognized datasets described in Section 2.5: FLEURS and Common Voice 13.0 (CV-13). These datasets provide valuable speaker-related information, including gender, language group (in the case of FLEURS), accent (exclusive to CV-13), and age (exclusive to CV-13). Building upon the information available in FLEURS, we curated a gender-balanced benchmark, which we refer to as Balanced-FLEURS. The primary goal here was to mitigate the influence of confusion variables such as sentence complexity and gender imbalance (where certain languages exhibit a higher percentage of speakers from one gender). To achieve this, we mixture the train, validation, and test sets of FLEURS, meticulously filtering them to ensure that each sentence was narrated by both a male and a female speaker. Meanwhile, we also ran a Voice Activity Detection model on the dataset, as we encountered a notable number of empty audio files in Spanish, Norwegian, and Malay1. We include in the experiments only the languages in which we were able to find at least 200 utterances. In addition to Balanced-FLEURS, we made use of the Common Voice 13.0 dataset, specifically its validation set. In this case, we leveraged gender and age information. 
While we attempted to incorporate accent information in our study as well, we encountered challenges in aggregating a sufficiently large dataset, even after merging the train, test, and validation splits. Consequently, we do not report our results with respect to accents. 3.1.2 Resourcefulness categorization In the course of our experiments, we have introduced a resourcefulness classification system specifically tailored to weakly-supervised speech models, with a primary focus 1 We have reported this issue to the Google Team via HuggingFace, listing all problematic files. The corresponding issue can be found here: https://huggingface.co/datasets/google/fleurs/discussions/16#6442a217f8b647fa4f50c489 \fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 13 on the transcription task (ASR). This categorization is designed to group languages based on the amount of training data used in the model pre-training. The classification involves clustering languages into categories with similar amounts of training data, and the intervals used for this classification can be found in Table 3. Resourcefulness ASR Training data (h) Super High-Resource \u22655000 High-resource [1000, 5000) Mid-to-high-resource [500, 1000) Low-to-mid-resource [100, 500) Low-resource [10, 100) Extremely Low-Resource (0, 10) Table 3 Proposed Language resourcefulness categorization for Weakly-supervised ASR models It is worth noting that our proposed classification system has a limitation in the context of Whisper. Specifically, it does not account the volume of training data available for the speech translation task. While this data does not directly impact the quality of generated text data for a language (since in Whisper, translation data available is to English only), it does play a role in enhancing the model\u2019s speech encoding capabilities. 3.2 Bias evaluation on Whisper In this section, we present preliminary experiments conducted on the Whisper model. Our aim here is to investigate whether bias exists in the original versions of Whisper. To achieve this, we compare Whisper\u2019s performance on the validation split of CV-13 and on Balanced-FLEURS. Our analysis involves an aggregate approach, where we average the metrics across languages. Figures 2 (Balanced-FLEURS) and 3 (CV-13) showcase the Word Error Rate (WER) performance across the languages covered in the two datasets for whisper-large-v2. These results reveal a clear correlation between performance and resourcefulness, with lower resource languages (Low and Extremely Low-Resource) consistently exhibiting poorest performance. Naturally, the impact varies among languages, possibly due to their complexity or the amount of training data available for closely-related languages. These findings collectively suggest a bias linked to resourcefulness. \f14 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 2 Performance across languages on whisper-large-v2 on Balanced-FLEURS. Languages are ranked on x-axis based its performance. Figure 4 illustrates the average relative difference between male and female speakers for Balanced-FLEURS on whisper-large-v2. This metric, already employed is previous similar study by Boito et al. (2022), is relevant here as the sentences are consistently the same across genders. Meanwhile, Figure 5 displays the absolute difference (following Costa-juss\u00e0 et al. (2022)) in WER between male and female speakers on CV-13. In both cases, the results show varying degrees of gender bias across different languages. 
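For clarity, the two gap metrics behind Figures 4 and 5 can be computed from per-utterance WER scores grouped by gender roughly as follows (a sketch; the jiwer package and this particular definition of the relative difference are assumptions):

```python
import jiwer

def wer_by_gender(samples):
    """samples: list of dicts with 'reference', 'hypothesis' and 'gender' keys."""
    scores = {"male": [], "female": []}
    for s in samples:
        scores[s["gender"]].append(jiwer.wer(s["reference"], s["hypothesis"]))
    return {g: 100 * sum(v) / len(v) for g, v in scores.items()}

def gender_gaps(samples):
    wer = wer_by_gender(samples)
    absolute = wer["male"] - wer["female"]                    # absolute gap (as in Fig. 5)
    relative = (wer["male"] - wer["female"]) / wer["female"]  # relative gap (as in Fig. 4)
    return absolute, relative
```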
Remarkably, these biases are consistent across the different datasets, implying that each language possesses its unique bias, likely attributed to the quality and diversity of its training data. While the model does exhibit gender bias, it is essential to note that, for the most part, this bias remains within a maximum average WER difference of 3 for the majority of languages (in the case of CV-13). Figure 6 extends the analysis by presenting WER performance across different languages on Balanced-FLEURS, mirroring Figure 2. However, this time, we consider all available model sizes within the Whisper family. Languages are ranked by resourcefulness. These results unveil two significant findings: (i) the performance trend aligns across nearly all languages, suggesting a consistent ranking of languages based on performance across all models; and (ii) notably, a clear correlation emerges between smaller model sizes and reduced performance, with the model curves closely overlapping. This phenomenon likely stems from the curse of multilinguality, wherein less resourceful languages exhibit larger performance disparities among model sizes. Addi\fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 15 Figure 3 Performance across languages on whisper-large-v2 on CV-13. Languages are ranked on x-axis based its performance. Figure 4 Average relative WER difference between male and female voice for Balanced-FLEURS. Languages are ranked on x-axis based its relative difference and resourcefulness. tionally, it\u2019s worth noting the differences between large and large-v2 models. Although both models share the same size, the former benefits from more extensive training, additional optimization steps, and data augmentation techniques. Finally, these findings collectively shed light on bias associated with architecture size, despite models being trained with the same dataset. \f16 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 5 Absolute WER difference between male and female voice for CV-13. Languages are ranked on x-axis based its absolute difference. Figure 6 Performance across languages and across different whisper sizes on Balanced-FLEURS. Languages are ranked on x-axis based its resourcefulness. 3.3 Bias evaluation on quantized Whisper Now, we delve into the quantized version of Whisper. In this set of experiments, we apply the LLM.int8() method (Dettmers et al., 2022) (described in Section 2.3) to Whisper. The primary objective of this study is to investigate whether the biases observed in the original Whisper model persist, diminish, or intensify after quantization. In essence, we seek to understand what model features may be forgotten due to quantization. In contrast to the previous section, our analysis here adopts a sentence-level approach. We compare the model\u2019s performance on individual sentences before and after quantization. To ensure a fair evaluation, we exclude sentences with initial Word Error Rate (WER) values greater than or equal to 100. For this sentence-level analysis, we create histograms based on the absolute difference in WER before and after compression. We categorize sentences into three groups: those that worsened (WER increased by \fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 17 more than 5), those that remained similar (WER difference less than 5), and those that improved (WER reduced by more than 5). 
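Concretely, the sentence-level bucketing used for the histograms below can be expressed as in the following sketch (per-sentence WER values are assumed to be pre-computed):

```python
def bucket_sentences(wer_before, wer_after, threshold=5.0, cap=100.0):
    """Categorize sentences by their WER change after quantization."""
    buckets = {"worsened": 0, "similar": 0, "improved": 0}
    for before, after in zip(wer_before, wer_after):
        if before >= cap:        # exclude sentences with initial WER >= 100
            continue
        delta = after - before
        if delta > threshold:
            buckets["worsened"] += 1
        elif delta < -threshold:
            buckets["improved"] += 1
        else:
            buckets["similar"] += 1
    return buckets

# Example: per-sentence WER (in %) before and after applying LLM.int8()
print(bucket_sentences([12.0, 35.0, 80.0], [12.5, 48.0, 70.0]))
# {'worsened': 1, 'similar': 1, 'improved': 1}
```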
Figure 7 Histogram of performance degradation by quantization per gender on Balanced-FLEURS Figure 8 Histogram of performance degradation by quantization per gender on CV13 Figures 7 (Balanced-FLEURS) and 8 (CV-13) present histograms categorized by gender for the whisper-large-v2 model. Figure 3 displays histograms categorized by age group for CV-13. The data clearly indicates that quantization equally impacts all genders and age groups, implying that gender and age biases are kept unchanged after quantization. In figures 10 (Balanced-FLEURS) and 11 (CV-13), we illustrate histograms categorized by language resourcefulness for whisper-large-v2. Here, a distinct pattern emerges: lower-resource languages are more significantly affected by quantization. While almost all sentences in super high-resource languages maintain their performance, approximately 25% of sentences in extremely low-resource languages are impacted (in the case of Balanced-FLEURS). Consequently, quantization amplifies the resourcefulness bias. \f18 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 9 Histogram of performance degradation by quantization per age group on CV-13 Figure 10 Histogram of performance degradation by quantization per resourcefulness group on Balanced-FLEURS Lastly, in figure 12 (Balanced-FLEURS) and ?? (CV-13), we present histograms considering all available model sizes within the Whisper family, grouped by model size. The results highlight significant differences in how quantization affects models of varying sizes. While a small proportion of sentences are impacted for whisper-large-v2, there is a striking contrast, with almost half of the sentences affected in the case of whisper-tiny. This highlights that the bias related to architecture size is significantly amplified by quantization. This last finding indicates that smaller models are generally more susceptible to the effects of quantization. This observation is particularly concerning as many parameterefficient domain adaptation methods in use today in NLP and Speech involve applying quantization first, without considering the model size. This calls for practitioners to \fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 19 Figure 11 Histogram of performance degradation by quantization per resourcefulness group on CV-13 Figure 12 Histogram of performance degradation by quantization per model size on Balanced-FLEURS exercise caution when adapting pre-trained models to avoid the addition of unintended bias. \f20 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS 3.4 Summary of the main findings Here we present the key takeaways from this chapter. First, Whisper exhibits certain speaker-related biases, such as gender and age. These biases are kept unchanged after applying quantization to the model. On the other hand, biases associated with the model itself (model-related bias), including language resourcefulness and architecture size, are amplified by quantization. Overall, Low-resource languages are the most adversely affected by quantization. Moreover, there is a clear pattern on the architecture size, with smaller models experiencing more significant performance degradation compared to larger ones. This is concerning as current parameter-efficient approaches (such as QLoRA presented on Section 2.3) mostly apply quantization first, regardless of the model size. This presents a significant challenge: Can we enhance the performance of smaller models for languages where they currently perform poorly, even though the best model performs well? 
We aim to find an alternative to quantization for reducing the model size. \fDISTILWHISPER 21 4 DistilWhisper One prominent observation is the significant Automatic Speech Recognition (ASR) performance gap between the whisper-large-v2 model and its counterparts of smaller sizes, especially when applied to a diverse set of languages. This performance gap is noticeable across a wide spectrum of languages, including not only low-resource ones but also many mid- and high-resource languages. As our earlier analysis, outlined in Chapter 3, revealed, the \"lower\" resource languages are also the most affected by lightweight compression techniques. This phenomenon is often referred to as the curse of multilinguality (as discussed in related works by Arivazhagan et al. (2019), Conneau et al. (2020), and Goyal et al. (2021)). It stems from the inherent challenge that arises when attempting to cover an extensive array of languages within a single model: performance inevitably suffers unless the model is significantly scaled up. This leads us to the central question that has motivated our research: Can we improve the performance of smaller models for languages in which they currently perform poorly, but the best model performs well? A common approach to achieving efficient inference could be distilling knowledge from a larger multilingual teacher model into a smaller pre-existing one, as highlighted in prior work by Sanh et al. (2020) and Mohammadshahi et al. (2022). However, when it comes to applying such knowledge distillation (KD) to whisper-large-v2, which represents the best and largest Whisper model, we face a significant hurdle: maintaining the original model\u2019s robustness would require access to information that is not readily available, such as the full training data spanning all tasks and languages and the original learning objective. Recent research findings, exemplified by works like Pfeiffer et al. (2022) and Pratap et al. (2023), have demonstrated an alternative solution to the curse of multilinguality. This approach involves equipping moderately sized models with language-specific (LS) modules. This sparse architectural design permits the extension of model parameters through additional modules as more languages are incorporated into the model. Consequently, it ensures consistent performance across languages without incurring substantial additional computational costs during inference. In light of the overarching goal of enhancing model performance for various languages within the constraints of limited model capacity, our work introduces the DistilWhisper approach. We incorporate conditional language-specific routing (CLSR) modules, as described by B. Zhang et al. (2021), into a smaller version of Whisper, and we then optimize these modules jointly through ASR fine-tuning and knowledge distillation from a larger Whisper model (whisper-large-v2). \f22 DISTILWHISPER Figure 13 The DistilWhisper optimization approach (left), and its architecture (right). The feed-forward is replaced by a CLSR module, where the LS gates (g) learn to alternate between the pre-trained frozen multilingual representation and the LS layer.
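Complementing Figure 13, a simplified sketch of a CLSR feed-forward block as used in DistilWhisper is given below (illustrative PyTorch; layer sizes, the noise schedule, and naming are assumptions, and the precise formulation follows in Section 4.1):

```python
import torch
import torch.nn as nn

class CLSRFeedForward(nn.Module):
    """Feed-forward block with a shared (frozen) path and per-language experts."""
    def __init__(self, shared_ffn: nn.Module, d_model: int, languages: list):
        super().__init__()
        self.shared = shared_ffn                      # frozen pre-trained feed-forward path
        for p in self.shared.parameters():
            p.requires_grad = False
        # One expert and one gate per language (experts are initialized from the
        # shared feed-forward weights in the actual model).
        self.experts = nn.ModuleDict({l: nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for l in languages})
        self.gates = nn.ModuleDict({l: nn.Sequential(
            nn.Linear(d_model, d_model // 4), nn.ReLU(), nn.Linear(d_model // 4, 1))
            for l in languages})

    def forward(self, z, lang: str, noise_scale: float = 0.0):
        logit = self.gates[lang](z)                   # one scalar gate per token
        if self.training:
            # Increasing Gaussian noise pushes the soft gate towards a binary decision.
            g = torch.sigmoid(logit + noise_scale * torch.randn_like(logit))
        else:
            g = (logit >= 0).float()                  # hard routing at inference time
        return g * self.experts[lang](z) + (1 - g) * self.shared(z)
```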
For a visual representation of our architecture, please refer to Figure 13, and in the subsequent sections, we delve into the key components of our approach. Following, in this chapter, we detail the elements that make up our approach. Then, in the next chapter (Chapter 5), we will present how we validate this approach and its results following the DistilWhisper approach presented here. 4.1 Conditional Language-Specific Routing We extend Conditional Language-Specific Routing (CLSR) modules proposed by B. Zhang et al. (2021), and commonly used in Multilingual Neural Machine Translation, for the first time to the speech domain. This module, which introduces sparsity to the Transformer architecture, learns a hard binary gate g(\u00b7) for each input token by using its hidden embedding zl. These decisions enable a layer to selectively guide information through either a LS path denoted as hlang or a shared path referred to as hshared, as in Eq. 4.1: CLSR(zl) = g(zl) \u00b7 hlang(zl) + (1 \u2212g(zl)) \u00b7 hshared(zl). (4.1) In contrast to the original CLSR, in this work we use language-specific gates as shown in Figure 13, instead of sharing them across languages. This allows us to train languagespecific components individually (i.e. in parallel), and then only load the relevant modules at inference. Moreover, our approach also differs from the original CLSR by the positioning: supported by previous work (Pfeiffer et al., 2022; B. Zhang et al., 2021), we limit CLSR to the feed-forward network (correspondent to the feature domain of the Transformer architecture), which we also replace entirely by the CLSR module, reducing the increment in the number of parameters. \fDISTILWHISPER 23 Following the proposal from B. Zhang et al. (2021), each gate g(.) is made by a twolayer bottleneck network, which is summed to a increasing zero-mean Gaussian noise during training to discretize it: g(zl) = \u03c3(G(zl) + \u03b1(t) \u00b7 N(0, 1)), (4.2) with G(zl) = ReLU(zlW1 + w2), (4.3) where \u03c3(\u00b7) is the logistic-sigmoid function, and W1 and w2 are trainable parameters. \u03b1 is linearly increased along with training steps t. At inference time, we adopt hard gating: g(zl) = \u03b4(G(zl) \u22650), (4.4) where \u03b4(\u00b7) is a Dirac measure. 4.2 DistilWhisper approach Figure 13 presents our proposed DistilWhisper architecture. Our student is enriched with CLSR modules at each feed-forward for each language. These all experts in each CLSR layer are equally initialized from the frozen weights of the corresponding feed-forward layer. At training time, for each language the model updates only the corresponding language-specific experts and gates. At inference time, the model loads the shared layers (multilingual) and the Language-Specific experts and gates for the languages of interest, resulting in a limited parameter overhead. We highlight that the use of CLSR modules brings more flexibility to our architecture when compared to adapters, as it allows for routing at the token-level. This makes this approach more capable of leveraging pre-existing knowledge (shared frozen module), activating the Language-Specific path only when this is likely to increase performance. 4.3 DistilWhisper optimization The optimization of our DistilWhisper architecture consist of a standard cross-entropy loss, along with two new elements: gate budget loss, and knowledge distillation. Following we detail these new elements. 4.3.1 Gate budget loss Following B. Zhang et al. 
(2021), when learning CLSR module parameters, in addition to standard cross-entropy loss LCE, we optimize a gate budget loss Lg to balance \f24 DISTILWHISPER models\u2019 usage of language-specific and shared modules. It relies on the gate g(.) activation values for a pair (audio, text) (X, Y ) in a batch B, which is expressed by: G(X,Y ) = X x\u2208X X m\u2208Menc gm(x) + X y\u2208Y X m\u2208Mdec gm(y) (4.5) where Menc and Mdec are respectively the sets of encoders and decoders layers, and gm(.) = 1 when LS expert is selected in the layer m, or gm(.) = 0 otherwise. The average of this gate usage, representing the amount of language-specific experts used for the model in the batch, is constrained to a budget b. So the final gate budget loss is expressed by: Lg = \f \f \f \f \f P (X,Y )\u2208B G(X,Y ) P (X,Y )\u2208B(|X||Menc| + |Y ||Mdec|) \u2212b \f \f \f \f \f (4.6) For regularization, also it is used a skip gate probability (s), that randomly choose a proportion s of the gates to be closed (use only shared part) during training. 4.3.2 Knowledge Distillation For Knowledge Distillation (KD), following recent research (Go et al., 2023; Wen et al., 2023), we employ Jensen\u2013Shannon divergence (JS), whose loss is detailed in Eq 4.7: LKD = 1 2EY\u223cp h log p(Y) m(Y) i + 1 2EY\u2032\u223cq\u03b8 h log q\u03b8(Y\u2032) m(Y\u2032) i (4.7) where p is the teacher distribution, q\u03b8 is the student distribution, Y and Y\u2032 are sampled from the teacher\u2019s and student\u2019s distributions and compared with their average m(\u00b7) = 1 2p(\u00b7) + 1 2q\u03b8(\u00b7). 4.3.3 Final Learning Objective The final learning objective the leverages the dataset labels using cross-entropy loss LCE, but also enforces the use of a specific budget via gate budget loss Lg and mirrors the behavior of the teacher with the knowledge distillation loss LKD.Thus, CLSR modules parameters are learned to minimize final loss expressed as: L = LCE + Lg + \u03b2LKD (4.8) where \u03b2 is a constant defined based on the quality of the teacher, but can also be scheduled or learned (with the add of new constraints for its magnitude). \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 25 5 Experiments and Results on DistilWhisper In the former chapter we presented the DistilWhisper approach. In this chapter we present how we validate our architecture and the method as a whole, showing that our approach is able to outperform both classical fine-tuning and adapters on whisper-small, providing better generalization through light-weight ASR fine-tuning and Knowledge Distillation of the teacher model. Code and models produced in this studied will soon be made available on Hugging Face and Github. 5.1 Experimental Setup In this section we overview our validation setup, that includes choosing the data we use for training and evaluating models, as well as which languages and baselines to consider. We also discuss some code implementation details. 5.1.1 Datasets In order to validate the proposed architecture, we make use of a sample of two widely used massively-multilingual datasets: CommonVoice 13.0 and FLEURS. More details about these datasets are presented on Section 2.5. In our experiments, we applied downsampling to both the train and validation sets of CV-13, ensuring an equal allocation of training data for each selected language in each experiment. For our primary experiment, we employed 10,000 utterances for training (approximately 14 hours of audio data) and 1,000 for validation. 
Additionally, we explored variations in dataset size, using downsampled sets of 3,000 and 28,000 utterances in scalability experiments. The selection of data for downsampling was guided by the number of up-votes received by annotators. Notably, we did not apply downsampling to the test set. For most part of our experiments, FLEURS serves as an invaluable resource for conducting out-of-domain evaluations. It offers a favorable degree of language overlap with the CommonVoice 13.0 dataset (CV-13), making it a suitable choice for comparative analysis. Notably, FLEURS provides an effective out-of-domain setting in the context of ASR evaluation. For instance, while the average number of tokens per sample in CV-13 is 36, FLEURS exhibits a substantially higher average of 97 tokens per sample. \f26 EXPERIMENTS AND RESULTS ON DISTILWHISPER 5.1.2 Language Selection In this work we focus on bridging the performance gap for a subset of under-performing languages of the whisper-small model through light-weight ASR fine-tuning and Knowledge Distillation of the whisper-large-v2 model, as proposed in chapter 4. For validating our method, we consider all Whisper languages with a WER gap of more than 11 between large and small models on CV-13. For our validation experiments we then narrow this list considering: 1) minimum amount of 10k utterances; 2) an overlap with the FLEURS dataset for out-of-domain evaluation. For scalability experiments we loose the first requirement so we can include more diverse set of languages, considering a minimum amount of 3k utterances. We also experiment with the languages in a setting with 28k utterances. Resourcefulness ASR Train data (h) Languages per setting 3k 10k 28k High-resource [1000, 5000) ca, fi, id, pl ca, pl ca Mid-to-high-resource [500, 1000) uk, vi uk Low-to-mid-resource [100, 500) cs, hu, ro, th, ta cs, hu, th, ta ta, th Low-resource [10, 100) bg, hi, sk, sl Extremely Low-Resource (0, 10) gl gl Table 4 Languages used in the experiments for validation of DistilWhisper grouped by resourcefulness. The final list of languages is: Bulgarian (bg), Catalan (ca), Czech (cs), Finnish (fi), Galician (gl), Hindi (hi), Hungarian (hu), Indonesian (id), Polish (pl), Romanian (ro), Slovak (sk), Slovenian (sl), Tamil (ta), Thai (th), Ukranian (uk), and Vietnamease (vi).2 These languages belong to 7 distinct language sub-families and exhibit significant variation in terms of their representation within the Whisper training data. This variation extends from a substantial 4,300 hours for certain languages, such as Polish (pl), to a mere 9 hours for languages like Galician (gl). For a detailed overview of these languages and their distribution across the three dataset sizes (3k, 10k, 28k), categorized by their resourcefulness (following the classification proposed on Section 3.1.2), please refer to Table 4. Additionally, Table 5 organizes these languages into groups based on their respective sub-families. 2 Although Arabic would also qualify considering our criteria, we find that the dialect from FLEURS differs from the ones present on CV-13. \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 27 Sub-families Languages per setting 3k 10k 28k Slavic (Indo-European) bg, cs, pl, sk, sl, uk cs, pl Romance (Indo-European) ca, gl, ro ca, gl ca Finno-Ugrian (Uralic) fi, hu hu Austroasiatic id, vi Dravidian ta ta ta Tai (Kra\u2013Dai) th th th Indo-Iranian (Indo-European) hi Table 5 Languages used in the experiments for validation of DistilWhisper grouped by language sub-families. 
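Putting Sections 5.1.1 and 5.1.2 together, the per-language training subsets can be assembled roughly as in the sketch below (the dataset identifier and the up_votes column follow the HuggingFace release of CommonVoice 13.0 and are assumptions here):

```python
from datasets import load_dataset

def build_subset(lang: str, n_train: int = 10_000, n_valid: int = 1_000):
    """Downsample CV-13 for one language, preferring well-voted utterances."""
    cv = load_dataset("mozilla-foundation/common_voice_13_0", lang)
    train = cv["train"].sort("up_votes", reverse=True).select(range(n_train))
    valid = cv["validation"].sort("up_votes", reverse=True).select(range(n_valid))
    test = cv["test"]   # the test set is never downsampled
    return train, valid, test

# e.g. the 10k setting for Catalan
train_ca, valid_ca, test_ca = build_subset("ca", n_train=10_000, n_valid=1_000)
```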
5.1.3 Models and Baselines In our evaluation, we assess our approach in comparison to several baseline models. These include the whisper-small model, serving as our pre-trained student and starting point, and the whisper-large-v2 model, acting as the teacher model, and ultimately, as the target goal. Additionally, we explore two fine-tuning (FT) approaches for the student model: standard fine-tuning, where all model weights are updated, and LoRA adaptation, which focuses on refining the feed-forward layer. Moreover, we delve into the effects of the Conditional Language-Specific Routing (CLSR) layer independently, without knowledge distillation (KD), referred to as CLSR-FT. This allows us to isolate the influence of KD from the impact of the CLSR layer on the model\u2019s overall robustness. 5.1.4 Implementation details We conducted our experiments using the Transformers library (Wolf et al., 2020) and leveraged the pre-trained weights of both whisper-small and whisper-large-v2 models, sourced from HuggingFace3 4. Unless where stated different, our training protocol consisted of ten epochs, utilizing a learning rate of 10\u22124 with linear decay, a one-epoch warm-up phase, a batch size of 16, and a label smoothing factor of 0.1. For LoRA adaptation, we tested two scenarios: 1) We first adopted the hyperparameters proposed by M. Wang et al. (2023), notably r = 32, which is the most commonly 3 https://huggingface.co/openai/ 4 https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013 \f28 EXPERIMENTS AND RESULTS ON DISTILWHISPER used for this type of adapters; 2) We increase the hidden dimension of the adapters to r = 64, so the size of the adapters are comparable to the Language-specific modules on DistilWhisper. In the case of training the CLSR, we set the gate budget (b) to 0.5 and the skip-gate probability (s) to 0.2. For knowledge distillation (KD), we employed the Jensen\u2013Shannon divergence (JS) with a temperature (\u03c4) of 1, unless when stated in contrary. This was weighted such that the learning objective (L) consisted of the cross-entropy loss (LCE), the gate loss (Lg), and twice the KD loss (2LKD): L = LCE + Lg + 2LKD. We reported the normalized Word Error Rate (WER) using the Whisper normalization method, with a slight modification to prevent the splitting of numbers and Latin-scripted text into individual characters in languages that do not employ space delimitation (e.g., Thai). Further details, including the modified normalization method, implementation scripts, and model weights, will soon be made available on GitHub and HuggingFace. Throughout our experiments, we selected the best-performing model based on its WER performance on the downsampled CV-13 validation set. 5.2 DistilWhisper versus other adaptation approaches Table 6 presents the results for our first experiment. The top portion presents whisper-large-v2 (upper bound) and whisper-small (lower bound) pre-trained scores, which should not be directly compared to the other adaptation techniques (middle and bottom), as these models were not trained on CV-13 (full out-of-domain setting). The middle portion presents standard fine-tuning (FT) and LoRA adaptation at the feed-forward layers (LoRA-FT). Our results are presented in the bottom: CLSR-FT corresponds to the setting without LKD, while DistilWhisper is the complete setting in which both CLSR and KD losses are leveraged. 
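For reference, the LoRA-FT baseline rows in Table 6 can be reproduced in spirit with the peft library, roughly as follows (a sketch; the target module names fc1/fc2 for Whisper's feed-forward layers and the hyperparameters beyond the rank r are assumptions):

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

lora_config = LoraConfig(
    r=64,                            # rank of the update matrices (r=32 or 64 in our runs)
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["fc1", "fc2"],   # feed-forward layers of each Transformer block
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()   # only the LoRA matrices A and B are trainable
```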
For whisper-small, we observe that both the standard fine-tuning (FT) and the LoRA adapter (LoRA-FT) approaches (middle portion of Table 6) are able to enhance performance on the in-domain test set (CV-13). However, as anticipated, employing FT leads to a decline in performance on the out-of-domain test set, with an average WER increase of 1.6. This is likely attributable to catastrophic forgetting, resulting in a tendency to over-specialize in the target domain. In contrast, LoRA-FT is a more lightweight adaptation technique that preserves the pre-trained representation. Remarkably, it exhibits performance improvements on both the in-domain (average WER decrease of 12.8) and out-of-domain (average decrease of 5.6) test sets when compared to whisper-small. Notably, experimenting with a larger hidden dimension (r) for the LoRA adapters did not yield any perceptible improvement on average.

Common Voice 13.0 (in-domain for FT only)
Model | # params | avg | ca | th | ta | hu | cs | pl | gl | uk
whisper-large-v2 | 1.5 B | 14.9 | 16.9 | 9.3 | 17.3 | 18.6 | 14.5 | 8.1 | 19.0 | 15.6
whisper-small | 244 M | 31.4 | 30.1 | 20.3 | 30.1 | 45.5 | 38.6 | 18.8 | 35.7 | 32.3
+FT | 244 M | 22.0 | 19.0 | 10.9 | 17.3 | 30.4 | 29.2 | 21.4 | 19.3 | 28.8
+LoRA-FT (r=32) | 256 M | 18.6 | 15.7 | 9.2 | 15.3 | 30.5 | 25.0 | 15.4 | 12.8 | 24.8
+LoRA-FT (r=64) | 267 M | 18.6 | 15.5 | 9.2 | 15.5 | 30.6 | 25.2 | 15.4 | 13.0 | 24.6
+CLSR-FT | 269 M | 16.4 | 13.9 | 7.4 | 13.6 | 24.9 | 20.9 | 16.0 | 11.2 | 23.5
DistilWhisper | 269 M | 16.1 | 13.8 | 7.2 | 12.5 | 24.1 | 19.9 | 16.1 | 11.6 | 23.2

FLEURS (out-of-domain)
Model | # params | avg | ca | th | ta | hu | cs | pl | gl | uk
whisper-large-v2 | 1.5 B | 12.6 | 5.6 | 12.6 | 19.3 | 17.9 | 14.4 | 5.9 | 16.8 | 8.3
whisper-small | 244 M | 29.2 | 14.6 | 22.7 | 36.2 | 42.9 | 40.3 | 18.2 | 33.5 | 24.8
+FT | 244 M | 30.8 | 19.1 | 28.2 | 31.6 | 51.3 | 38.9 | 26.1 | 23.2 | 27.9
+LoRA-FT (r=32) | 256 M | 23.6 | 15.5 | 17.6 | 25.5 | 38.5 | 33.4 | 18.5 | 17.7 | 22.3
+LoRA-FT (r=64) | 267 M | 23.6 | 15.7 | 17.6 | 25.7 | 38.2 | 33.9 | 18.5 | 17.3 | 22.1
+CLSR-FT | 269 M | 23.6 | 15.5 | 15.7 | 23.2 | 37.6 | 31.2 | 22.9 | 16.9 | 25.9
DistilWhisper | 269 M | 22.8 | 15.4 | 15.1 | 21.6 | 37.2 | 29.8 | 21.4 | 16.7 | 25.1
Table 6 WER (\u2193) for the 10k setting with dataset averages (avg) for baselines (top), adaptation approaches (middle), and our method (bottom) on the in-domain (CV-13, FT only) and out-of-domain (FLEURS, all) test sets. Best results for whisper-small in bold.

Our approach, DistilWhisper, yields notable performance enhancements. When compared to whisper-small, it achieves a substantial improvement on in-domain data, with an average WER decrease of 15.3. This improvement is also evident when compared to LoRA-FT, where an average decrease of 2.2 is observed. Additionally, DistilWhisper exhibits superior adaptability in out-of-domain scenarios when contrasted with the original whisper-small, resulting in an average WER decrease of 6.4. Furthermore, it demonstrates more effective out-of-domain adaptation than LoRA-FT, with an average decrease of 0.8. We observe that both versions of our approach, with and without KD, outperform all other adaptation approaches (FT, LoRA-FT) in-domain and out-of-domain in all languages but two (pl and uk) (bottom portion of Table 6). These findings highlight the robustness of our approach, showcasing that the proposed architecture, with the addition of CLSR layers to Whisper, provides a strong solution. Notably, all of these improvements are achieved with a mere 25 million parameter overhead during inference (10% of the original model size).
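For reference, the sketch below shows how WER numbers comparable to those in Table 6 can be computed for a single language (Catalan on the FLEURS test split). It is a simplified evaluation sketch: the FLEURS hub config name ca_es and its transcription column are assumptions, and the stock BasicTextNormalizer is used instead of the modified normalizer described in Section 5.1.4.

import evaluate
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
normalizer = BasicTextNormalizer()
wer_metric = evaluate.load("wer")

test = load_dataset("google/fleurs", "ca_es", split="test")
# Force the language and task tokens so decoding does not depend on language identification.
forced_ids = processor.get_decoder_prompt_ids(language="catalan", task="transcribe")

predictions, references = [], []
for sample in test:
    inputs = processor(sample["audio"]["array"],
                       sampling_rate=sample["audio"]["sampling_rate"],
                       return_tensors="pt")
    generated = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
    text = processor.batch_decode(generated, skip_special_tokens=True)[0]
    predictions.append(normalizer(text))
    references.append(normalizer(sample["transcription"]))

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))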
5.3 Impact of knowledge distillation

In this analysis, we compare the two versions of our approach: one optimizes the lightweight CLSR-based architecture without knowledge distillation (CLSR-FT), while the other incorporates the knowledge distillation loss (DistilWhisper). Across the examined languages, we observe some interesting trends. Firstly, when considering in-domain performance, as shown in Table 6, the DistilWhisper model exhibits a slight improvement in average WER of 0.3; its performance is superior in all languages but Polish and Galician. Moreover, in out-of-domain scenarios, DistilWhisper consistently outperforms CLSR-FT across all languages, resulting in an average WER improvement of 0.8. This observation confirms our initial hypothesis that the inclusion of knowledge distillation leverages the robustness imparted by the teacher model, preventing over-specialization in the CV-13 domain. Collectively, these results underscore the effectiveness of our proposed architecture. Notably, we managed to bridge the out-of-domain performance gap between large-v2 and small by a substantial 39%, reducing it from 16.6 to 10.2 (an average WER decrease of 6.4). All of this was achieved with only a modest 10% parameter overhead during inference (25 million parameters).

5.4 DistilWhisper Scalability

In the previous sections we showed that our architecture improves scores on both in-domain and out-of-domain datasets, compared to other adaptation approaches. In this section we investigate the effectiveness of our method with respect to the amount of data available for training. For this, we select the subset of languages for which more training data is available in CV-13 (ca, th, ta). Table 7 presents results for our approach in a lower-resource training setting (3k utterances; approx. 4 hours) and a higher-resource setting (28k utterances; approx. 40 hours). The 10k results, as well as the results for whisper-large-v2 and whisper-small, are repeated from Table 6. We observe that, as expected, increasing the number of training examples leads to superior ASR performance for both approaches, with the KD-based variant (DistilWhisper) being consistently superior to CLSR-FT and coming closer to closing the out-of-domain performance gap. For the 28k setup, we are able to reduce the out-of-domain WER gap between whisper-large-v2 and whisper-small by 75.8%, from 12.0 to 2.9.

Model | Train size | FLEURS avg | CV-13 avg | FLEURS ca | FLEURS ta | FLEURS th | CV-13 ca | CV-13 ta | CV-13 th
whisper-large-v2 | - | 12.5 | 14.5 | 5.6 | 19.3 | 12.6 | 16.9 | 17.3 | 9.3
whisper-small | - | 24.5 | 26.8 | 14.6 | 36.2 | 22.7 | 30.1 | 30.1 | 20.3
+LoRA-FT (r=64) | 3k | 22.7 | 17.0 | 17.7 | 28.6 | 21.7 | 19.4 | 19.0 | 12.5
+CLSR-FT | 3k | 20.4 | 15.2 | 17.8 | 25.4 | 17.9 | 19.2 | 16.7 | 9.7
DistilWhisper | 3k | 20.2 | 14.8 | 17.2 | 25.7 | 17.6 | 18.9 | 15.9 | 9.6
+LoRA-FT (r=64) | 10k | 19.7 | 13.4 | 15.7 | 25.7 | 17.6 | 15.5 | 15.5 | 9.2
+CLSR-FT | 10k | 18.1 | 11.6 | 15.5 | 23.2 | 15.7 | 13.9 | 13.6 | 7.4
DistilWhisper | 10k | 17.4 | 11.2 | 15.4 | 21.6 | 15.1 | 13.8 | 12.5 | 7.2
+LoRA-FT (r=64) | 28k | 17.2 | 11.1 | 13.6 | 23.0 | 15.1 | 12.5 | 13.5 | 7.3
+CLSR-FT | 28k | 15.6 | 9.7 | 13.5 | 19.6 | 13.8 | 11.5 | 11.3 | 6.2
DistilWhisper | 28k | 15.4 | 9.3 | 13.1 | 19.2 | 14.0 | 11.3 | 10.9 | 5.7
Table 7 WER (\u2193) for different training data sizes (3k, 10k, and 28k utterances) on both in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results in bold.

Furthermore, our approach demonstrates commendable robustness with respect to the quantity of training examples.
Even with as few as 3,000 utterances (equivalent to approximately 4 hours of training data), we are able to reduce the WER performance gap by 35.8% on out-of-domain data. This suggests that our method holds promise for enhancing ASR performance in low-resource languages, where training data availability is limited. Across all three settings, our approaches consistently outperform LoRA adapters by a significant margin. Additionally, it is worth noting that, in nearly all cases within these settings, the inclusion of knowledge distillation proved more beneficial than fine-tuning alone, reinforcing the findings discussed in Section 5.3.

5.5 Gate Activation Analysis

To better understand how the model uses the routing mechanism, we analyze gate activation statistics for the experiments discussed in Section 5.4, for both CLSR-FT and DistilWhisper. These results are presented in Figure 14. Firstly, we observe a tendency for the models to rely more heavily on the newly introduced Language-Specific modules in out-of-domain scenarios. This could be attributed to the greater complexity and larger sentence sizes prevalent in the FLEURS dataset.

Figure 14 Ratio of LS layers chosen by the models (CLSR-FT and DistilWhisper) for Catalan, Thai, and Tamil (LS activation, in %), depending on (1) the amount of training data (3k, 10k, 28k); (2) in-domain (CV-13) or out-of-domain (FLEURS) evaluation; (3) the language.

Also, as expected, enlarging the training dataset consistently results in more reliable Language-Specific modules, leading to increased utilization of these modules. The only exception to this is Thai in the 28k setup with CLSR-FT, which might be due to dataset quality and requires further investigation. The comparison of the three languages reveals that Catalan displays a notably higher reliance on Language-Specific routes. This characteristic might be linked to the superior data quality available for Catalan in CV-13, where a substantial number of contributors have contributed to the dataset. Moreover, the distilled version uses more LS modules, probably because the teacher, whisper-large-v2, is a particularly strong model for this language. For languages with a weaker teacher (Thai, Tamil), we observe that the model may receive contradictory signals in lower-resource settings (3k, 10k), leading to less Language-Specific routing when using knowledge distillation. However, in the higher-resource setting (28k), KD systematically leads to more reliable Language-Specific modules and therefore higher LS routing. Finally, we observe a common trend across the three languages: models tend to employ more Language-Specific routes when learning with knowledge distillation (DistilWhisper vs. CLSR-FT). This suggests that KD imparts valuable information and enhances the out-of-domain generalization capabilities of the learned Language-Specific representation.
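The statistic plotted in Figure 14 can be computed as sketched below: the fraction of CLSR gate decisions that select the language-specific (LS) branch rather than the shared one. This is illustrative only; it assumes the per-layer binary gate values are exposed by the CLSR layers during the forward pass, which is an assumption about the implementation rather than a public API.

import torch

def ls_activation_ratio(gate_decisions):
    # gate_decisions: list of (batch, seq_len) tensors with values in {0, 1},
    # one per CLSR layer, where 1 means the token was routed to the LS expert.
    gates = torch.cat([g.float().flatten() for g in gate_decisions])
    return 100.0 * gates.mean().item()  # percentage of LS routing

# Example with dummy decisions from two CLSR layers:
dummy = [torch.randint(0, 2, (8, 120)), torch.randint(0, 2, (8, 120))]
print(f"LS activation: {ls_activation_ratio(dummy):.1f}%")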
Common Voice 13.0 (in-domain)
Model | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk
whisper-large-v2 | 17.0 | 19.9 | 16.9 | 14.5 | 14.4 | 19.0 | 24.6 | 18.6 | 8.5 | 8.1 | 15.8 | 31.9 | 20.6 | 17.3 | 9.3 | 15.6
whisper-small | 34.2 | 44.8 | 30.1 | 38.6 | 30.5 | 35.7 | 43.6 | 45.5 | 22.5 | 18.8 | 33.2 | 42.0 | 45.5 | 30.1 | 20.3 | 32.3
+CLSR-FT | 22.9 | 26.1 | 19.2 | 25.7 | 25.1 | 15.3 | 18.8 | 31.6 | 19.2 | 18.3 | 23.4 | 36.6 | 28.6 | 16.7 | 9.7 | 29.5
DistilWhisper | 22.6 | 25.9 | 18.9 | 26.2 | 24.8 | 14.7 | 18.3 | 31.0 | 18.6 | 18.6 | 21.5 | 36.8 | 27.7 | 15.9 | 9.6 | 30.0

FLEURS (out-of-domain)
Model | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk
whisper-large-v2 | 13.7 | 14.6 | 5.6 | 14.4 | 9.7 | 16.8 | 23.8 | 17.9 | 7.1 | 5.9 | 14.4 | 11.7 | 23.1 | 19.3 | 12.6 | 8.3
whisper-small | 32.8 | 39.9 | 14.6 | 40.3 | 26.8 | 33.5 | 47.9 | 42.9 | 18.6 | 18.2 | 34.6 | 35.8 | 54.5 | 36.2 | 22.7 | 24.8
+CLSR-FT | 29.2 | 43.8 | 17.8 | 35.4 | 33.7 | 19.8 | 22.8 | 40.1 | 19.0 | 21.9 | 33.4 | 35.3 | 50.8 | 25.4 | 17.9 | 21.6
DistilWhisper | 29.2 | 42.8 | 17.2 | 35.6 | 32.0 | 18.7 | 21.8 | 41.1 | 19.1 | 21.9 | 33.1 | 35.2 | 50.5 | 25.7 | 17.6 | 25.9

Grouped by resourcefulness
Model | FLEURS (out-of-domain): High | Mid-to-high | Low-to-mid | Low | Extremely Low | CV-13 (in-domain): High | Mid-to-high | Low-to-mid | Low | Extremely Low
whisper-large-v2 | 7.1 | 8.3 | 15.7 | 18.3 | 16.8 | 12.0 | 15.6 | 15.1 | 24.3 | 19.0
whisper-small | 19.6 | 24.8 | 35.3 | 44.5 | 33.5 | 25.5 | 32.3 | 33.5 | 44.0 | 35.7
+CLSR-FT | 23.1 | 21.6 | 30.4 | 38.2 | 19.8 | 20.4 | 29.5 | 21.4 | 27.5 | 15.3
DistilWhisper | 22.5 | 25.9 | 30.6 | 37.6 | 18.7 | 20.2 | 30.0 | 20.8 | 27.2 | 14.7
Table 8 WER (\u2193) for the 3k setting with dataset averages (avg) for baselines and our method, on the in-domain (CV-13, upper portion) and out-of-domain (FLEURS, middle portion) test sets. In the lower portion, the same results are grouped by resourcefulness. Best results for whisper-small in bold.

5.6 Considerations on Resourcefulness

Our observations so far indicate that both versions of our approach, with and without knowledge distillation (KD), consistently outperform all other adaptation methods (FT and LoRA-FT). This improvement holds for both in-domain and out-of-domain scenarios across all languages, with only two exceptions in the 10k setting (Polish and Ukrainian), as indicated in the bottom portion of Table 6. The challenges encountered in these two languages can be attributed to their higher resource status, with Polish being a high-resource language and Ukrainian categorized as mid-to-high-resource, as detailed in Table 4. To deepen this analysis, we conducted experiments across a broader range of languages, widening the selection to those with a minimum of 3,000 utterances available for training. The outcomes of these experiments are presented in Table 8, where we have also aggregated the results into resourcefulness clusters (in the lower portion) based on the classification provided in Table 4. Examining the results, we observe that more substantial out-of-domain improvements are seen in languages with lower resource availability (the Low-to-mid, Low, and Extremely low-resource clusters). This aligns with the initial motivation behind our work, which aimed to address the curse of multilinguality: we expect lower-resource languages to be more strongly affected by this phenomenon during the pre-training of whisper-small. Consequently, they benefit significantly more from the integration of language-specific modules in the feature domain. In contrast, for languages with higher resource availability, further enhancements may be necessary, such as adjustments to attention weights (corresponding to the time domain).
This is due to the fact that the original model already performs reasonably well. Additionally, achieving better out-of-domain performance may require a larger training dataset. This is exemplified by the case of Catalan in Table 7: the CLSR modules yielded performance superior to the original whisper-small only when trained with 28,000 utterances, losing to their starting point for 3,000 and 10,000 training utterances.

5.7 Effect of temperature and distillation loss

In this set of experiments, our goal is to examine the impact of the chosen distillation objective on the results. We start by exploring the effect of temperature, which plays a crucial role in determining the learning behavior of the model. A lower temperature, such as 1, tends to make the learning focus primarily on replicating the top option from the teacher's logits for each token. Conversely, a higher temperature, such as 3 or 4, encourages the learning to take the other options into account, thereby mitigating the cost of incorrect predictions. However, this may lead to over-smoothing of the distribution and a reduced ability to effectively rank similar logits.

Common Voice 13.0 (in-domain)
Loss | avg | ca | th | ta | hu | cs | pl | gl | uk
JS w/ \u03c4 = 1 | 16.1 | 13.8 | 7.2 | 12.5 | 24.1 | 19.9 | 16.1 | 11.6 | 23.2
JS w/ \u03c4 = 3 | 16.3 | 14.1 | 7.5 | 13.1 | 23.5 | 21.1 | 16.2 | 11.6 | 23.6

FLEURS (out-of-domain)
Loss | avg | ca | th | ta | hu | cs | pl | gl | uk
JS w/ \u03c4 = 1 | 22.8 | 15.4 | 15.1 | 21.6 | 37.2 | 29.8 | 21.4 | 16.7 | 25.1
JS w/ \u03c4 = 3 | 23.4 | 17.0 | 15.6 | 21.5 | 36.0 | 31.4 | 22.4 | 16.8 | 26.2
Table 9 WER (\u2193) for the 10k setting with dataset averages (avg) for the JS loss with temperatures 1 and 3, on the in-domain (CV-13, upper portion) and out-of-domain (FLEURS, lower portion) test sets. Best results in bold.

Tables 9 and 10 present the results of comparing different temperatures (1 or 3) with the Jensen\u2013Shannon loss for the 10k and 28k settings, respectively. These results reveal that using a temperature of 1 generally yields better in-domain and out-of-domain performance than a temperature of 3. However, for Tamil and Hungarian, temperature 3 showed better out-of-domain performance. These results suggest that whisper-large-v2 serves as an effective teacher, justifying the use of a temperature of 1. Nevertheless, the optimal temperature value may vary depending on the quality of the teacher model for each specific language.

Loss | FLEURS avg | CV-13 avg | FLEURS ca | FLEURS ta | FLEURS th | CV-13 ca | CV-13 ta | CV-13 th
JS w/ \u03c4 = 1 | 15.4 | 9.3 | 13.1 | 19.2 | 14.0 | 11.3 | 10.9 | 5.7
JS w/ \u03c4 = 3 | 16.3 | 9.7 | 14.8 | 20.1 | 14.1 | 11.8 | 11.3 | 5.9
KL w/ \u03c4 = 1 | 15.6 | 10.8 | 14.6 | 18.7 | 13.3 | 14.9 | 11.3 | 6.2
KL w/ \u03c4 = 3 | 16.5 | 9.7 | 15.8 | 19.8 | 14.0 | 12.2 | 11.1 | 5.9
Table 10 WER (\u2193) for the 28k setting for the JS and KL losses with temperatures 1 and 3, on both out-of-domain (FLEURS) and in-domain (CV-13) test sets. Best results in bold.

Table 10 also compares the use of the Jensen\u2013Shannon (JS) loss with the traditional Kullback\u2013Leibler (KL) loss discussed in Section 2.4, specifically for the 28k setting. Once again, the results favor a temperature of 1 in both cases, with a slight advantage for the JS loss over KL, primarily driven by Catalan out-of-domain performance. This advantage is more pronounced in in-domain performance.
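The two objectives compared in Tables 9 and 10 can be sketched as follows: both soften the teacher and student distributions with a temperature tau, and differ only in using the asymmetric KL(p||q) of Equation 2.2 versus the symmetric JS divergence of Equation 2.3. The snippet also illustrates the smoothing effect of a higher temperature on a single token distribution. It is an illustrative sketch, not the training code used in this work.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, tau=1.0, kind="js"):
    p = F.softmax(teacher_logits / tau, dim=-1)          # softened teacher
    log_q = F.log_softmax(student_logits / tau, dim=-1)  # softened student (log)
    if kind == "kl":                                     # KL(p || q_theta), Eq. 2.2
        return F.kl_div(log_q, p, reduction="batchmean")
    q = log_q.exp()                                      # JS divergence, Eq. 2.3
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

# A higher temperature flattens the teacher distribution, e.g. for a single token:
logits = torch.tensor([4.0, 2.0, 0.5])
print(F.softmax(logits / 1.0, dim=-1))  # peaked: most mass on the top option
print(F.softmax(logits / 3.0, dim=-1))  # smoother: alternatives receive more weight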
The findings in Tables 9 and 10 point to the mode-averaging problem introduced in Section 2.4, although they are not definitive. They raise questions about whether these behaviors change when working with larger or smaller fine-tuning datasets and different levels of language resourcefulness. Unfortunately, due to time constraints, we could not explore these aspects in this study, leaving them as potential directions for future research.

5.8 Multi-domain training

In our final experiment, we investigate the impact of incorporating the train split of the FLEURS dataset into our training data in the previously explored settings. The objective here is to use the validated architecture to produce models that are more useful to the scientific community. In real-world scenarios, the models developed here are likely to be used in domains other than FLEURS or CV-13, so the hypothesis is that training on more than one dataset yields a better model.

Common Voice 13.0
Model | Train data | avg | ca | th | ta | hu | cs | pl | gl | uk
whisper-large-v2 | - | 14.9 | 16.9 | 9.3 | 17.3 | 18.6 | 14.5 | 8.1 | 19.0 | 15.6
whisper-small | - | 31.4 | 30.1 | 20.3 | 30.1 | 45.5 | 38.6 | 18.8 | 35.7 | 32.3
DistilWhisper | CV10k | 16.1 | 13.8 | 7.2 | 12.5 | 24.1 | 19.9 | 16.1 | 11.6 | 23.2
+CLSR-FT | CV10k + F | 15.5 | 15.1 | 6.8 | 12.4 | 21.9 | 18.4 | 16.3 | 11.3 | 22.2
DistilWhisper | CV10k + F | 14.6 | 13.2 | 6.4 | 11.6 | 21.6 | 15.3 | 15.8 | 11.2 | 21.6

FLEURS
Model | Train data | avg | ca | th | ta | hu | cs | pl | gl | uk
whisper-large-v2 | - | 12.6 | 5.6 | 12.6 | 19.3 | 17.9 | 14.4 | 5.9 | 16.8 | 8.3
whisper-small | - | 29.2 | 14.6 | 22.7 | 36.2 | 42.9 | 40.3 | 18.2 | 33.5 | 24.8
DistilWhisper | CV10k | 22.8 | 15.4 | 15.1 | 21.6 | 37.2 | 29.8 | 21.4 | 16.7 | 25.1
+CLSR-FT | CV10k + F | 17.2 | 11.8 | 10.1 | 16.0 | 28.1 | 23.2 | 17.1 | 12.9 | 18.7
DistilWhisper | CV10k + F | 16.7 | 11.9 | 9.4 | 14.6 | 27.7 | 22.1 | 17.7 | 12.7 | 17.3
Table 11 WER (\u2193) for the settings trained with 10k utterances from CV-13, with and without the FLEURS train split (+ F), with dataset averages (avg) for baselines (top) and our method (bottom), on the CV-13 and FLEURS test sets (both in-domain). Best results for whisper-small in bold.

Table 11 showcases the outcomes of training the model on 10k sentences from CV-13 along with the entire FLEURS train split. In this setting, we once again experiment with CLSR fine-tuning. For reference, the table also includes results from Section 5.2. The results reaffirm the better performance of the setting with knowledge distillation compared to CLSR-FT. More significantly, the results demonstrate a substantial in-domain improvement when FLEURS is incorporated into the training data: training with FLEURS reduces the average WER on CV-13 by 1.5. This improvement is likely due to FLEURS' greater sentence complexity and larger average token count per line, contributing to enhanced training data diversity.

In Table 12, we repeat the same experiment using settings with 3k and 28k sentences from CV-13, both added to the full FLEURS train split. The results allow us to draw the same conclusions: the addition of out-of-domain training data (FLEURS) results in superior in-domain generalization on CV-13. Nevertheless, it is evident that the size of the training data remains a limiting factor, as CV3k+F (approximately 6k sentences) was insufficient to surpass CV10k alone, and similarly for CV10k+F (around 13k sentences) in comparison to CV28k alone. In this section, we have presented the best models attainable for each setting using these two datasets. These models will be made open-source, and we hope they contribute to the development of speech recognition applications in these languages.
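The multi-domain training sets of Section 5.8 (e.g. CV10k + F) can be assembled as sketched below: the up-vote-downsampled CV-13 subset is concatenated with the FLEURS train split of the same language after mapping both to a common (audio, text) format at Whisper's 16 kHz input rate. Hub ids, config names, and column names are assumptions consistent with the earlier sketches; this is not the exact data pipeline used in this work.

from datasets import Audio, concatenate_datasets, load_dataset

def build_multidomain_train(cv_lang: str, fleurs_config: str, n_cv: int):
    # CV-13 portion: the n_cv most up-voted training utterances.
    cv = load_dataset("mozilla-foundation/common_voice_13_0", cv_lang, split="train")
    cv = cv.sort("up_votes", reverse=True).select(range(min(n_cv, len(cv))))
    cv = cv.rename_column("sentence", "text").select_columns(["audio", "text"])
    # FLEURS portion: the full train split.
    fleurs = load_dataset("google/fleurs", fleurs_config, split="train")
    fleurs = fleurs.rename_column("transcription", "text").select_columns(["audio", "text"])
    # Resample both to 16 kHz so the audio features match before concatenation.
    cv = cv.cast_column("audio", Audio(sampling_rate=16_000))
    fleurs = fleurs.cast_column("audio", Audio(sampling_rate=16_000))
    return concatenate_datasets([cv, fleurs]).shuffle(seed=42)

# Example: the CV10k + F training set for Catalan.
catalan_train = build_multidomain_train("ca", "ca_es", 10_000)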
Common Voice 13.0
Model | Train data | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk
whisper-large-v2 | - | 17.0 | 19.9 | 16.9 | 14.5 | 14.4 | 19.0 | 24.6 | 18.6 | 8.5 | 8.1 | 15.8 | 31.9 | 20.6 | 17.3 | 9.3 | 15.6
whisper-small | - | 34.2 | 44.8 | 30.1 | 38.6 | 30.5 | 35.7 | 43.6 | 45.5 | 22.5 | 18.8 | 33.2 | 42.0 | 45.5 | 30.1 | 20.3 | 32.3
DistilWhisper | CV3k | 22.6 | 25.9 | 18.9 | 26.2 | 24.8 | 14.7 | 18.3 | 31.0 | 18.6 | 18.6 | 21.5 | 36.8 | 27.7 | 15.9 | 9.6 | 30.0
DistilWhisper | CV3k + F | 19.3 | 21.8 | 15.0 | 21.7 | 22.4 | 14.2 | 15.8 | 26.4 | 17.0 | 17.2 | 18.0 | 29.3 | 22.9 | 13.4 | 7.8 | 27.0

FLEURS
Model | Train data | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk
whisper-large-v2 | - | 13.7 | 14.6 | 5.6 | 14.4 | 9.7 | 16.8 | 23.8 | 17.9 | 7.1 | 5.9 | 14.4 | 11.7 | 23.1 | 19.3 | 12.6 | 8.3
whisper-small | - | 32.8 | 39.9 | 14.6 | 40.3 | 26.8 | 33.5 | 47.9 | 42.9 | 18.6 | 18.2 | 34.6 | 35.8 | 54.5 | 36.2 | 22.7 | 24.8
DistilWhisper | CV3k | 29.2 | 42.8 | 17.2 | 35.6 | 32.0 | 18.7 | 21.8 | 41.1 | 19.1 | 21.9 | 33.1 | 35.2 | 50.5 | 25.7 | 17.6 | 25.9
DistilWhisper | CV3k + F | 18.3 | 21.0 | 12.3 | 24.2 | 19.7 | 13.9 | 13.5 | 29.0 | 13.0 | 16.6 | 21.4 | 19.4 | 27.5 | 15.1 | 10.3 | 18.1

28k setting
Model | Train data | FLEURS avg | CV-13 avg | FLEURS ca | FLEURS ta | FLEURS th | CV-13 ca | CV-13 ta | CV-13 th
whisper-large-v2 | - | 12.5 | 14.5 | 5.6 | 19.3 | 12.6 | 16.9 | 17.3 | 9.3
whisper-small | - | 24.5 | 26.8 | 14.6 | 36.2 | 22.7 | 30.1 | 30.1 | 20.3
DistilWhisper | CV28k | 15.4 | 9.3 | 13.1 | 19.2 | 14.0 | 11.3 | 10.9 | 5.7
DistilWhisper | CV28k + F | 11.4 | 9.0 | 10.8 | 14.2 | 9.4 | 10.9 | 10.5 | 5.6
Table 12 WER (\u2193) for the settings trained with 3k (upper and middle portions, on the CV-13 and FLEURS test sets) and 28k (lower portion) utterances from CV-13, with and without the FLEURS train split (+ F), with dataset averages (avg). Best results for whisper-small in bold.

6 Conclusion

This internship focused on investigating bias in Whisper, a family of large speech models, specifically examining speaker-related (gender, age, accent) and model-related (model size, resourcefulness, similar languages) biases. Additionally, we explored whether these biases are mitigated or exacerbated by quantization, and we proposed an alternative compression approach. Our findings revealed that Whisper exhibits both speaker-related and model-related biases. Speaker-related biases remain unchanged after quantization, while model-related biases are amplified by this compression technique. Low-resource languages are particularly affected, and smaller models experience significant performance degradation. This is concerning because current parameter-efficient approaches typically apply quantization uniformly across models, introducing unintended bias. To address this challenge, we introduced DistilWhisper, a parameter-efficient distillation approach that enhances the performance of whisper-small by transferring the robustness of whisper-large-v2 into the smaller model. This is achieved by incorporating language-specific gated modules and jointly optimizing ASR fine-tuning and knowledge distillation losses. Our results consistently showed performance improvements across various languages and test sets, with a minimal parameter increase during inference. We believe this approach will democratize the use of Whisper models, making them accessible to a wider audience of researchers and practitioners. This approach was written up as a paper submitted to ICASSP 2024 (Ferraz et al., 2024). Code and models produced in this study will be made available soon on Hugging Face and GitHub.

6.1 Future Work

There are several promising directions for future research in this area.
Firstly, it would be beneficial to expand upon the analysis presented in Chapter 3, including an investigation into other quantization methods, such as 4-bit quantization. Exploring these methods across various model families would help determine if the conclusions drawn here are applicable more broadly. This could present an important contribution to the community and ensure the correct usage of these techniques. Additionally, further research into the DistilWhisper approach could yield valuable insights. Examining the effects of several hyperparameters, such as gate budget, KD loss weight, and temperature, would provide a deeper understanding of the approach\u2019s \f40", + "additional_graph_info": { + "graph": [ + [ + "Thomas Palmeira Ferraz", + "Marcely Zanon Boito" + ], + [ + "Thomas Palmeira Ferraz", + "Vassilina Nikoulina" + ], + [ + "Thomas Palmeira Ferraz", + "Alexandre Alcoforado" + ], + [ + "Marcely Zanon Boito", + "Laurent Besacier" + ], + [ + "Marcely Zanon Boito", + "Fethi Bougares" + ], + [ + "Vassilina Nikoulina", + "Maxat Tezekbayev" + ], + [ + "Vassilina Nikoulina", + "Zhenisbek Assylbekov" + ] + ], + "node_feat": { + "Thomas Palmeira Ferraz": [ + { + "url": "http://arxiv.org/abs/2405.00966v1", + "title": "Efficient Compression of Multitask Multilingual Speech Models", + "abstract": "Whisper is a multitask and multilingual speech model covering 99 languages.\nIt yields commendable automatic speech recognition (ASR) results in a subset of\nits covered languages, but the model still underperforms on a non-negligible\nnumber of under-represented languages, a problem exacerbated in smaller model\nversions. In this work, we examine its limitations, demonstrating the presence\nof speaker-related (gender, age) and model-related (resourcefulness and model\nsize) bias. Despite that, we show that only model-related bias are amplified by\nquantization, impacting more low-resource languages and smaller models.\nSearching for a better compression approach, we propose DistilWhisper, an\napproach that is able to bridge the performance gap in ASR for these languages\nwhile retaining the advantages of multitask and multilingual capabilities. Our\napproach involves two key strategies: lightweight modular ASR fine-tuning of\nwhisper-small using language-specific experts, and knowledge distillation from\nwhisper-large-v2. This dual approach allows us to effectively boost ASR\nperformance while keeping the robustness inherited from the multitask and\nmultilingual pre-training. Results demonstrate that our approach is more\neffective than standard fine-tuning or LoRA adapters, boosting performance in\nthe targeted languages for both in- and out-of-domain test sets, while\nintroducing only a negligible parameter overhead at inference.", + "authors": "Thomas Palmeira Ferraz", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction 1.1 Motivation Over the past three years, the field of Natural Language Processing (NLP) has been revolutionized by the introduction of large pre-trained models, often referred to as \"foundation models.\" These models, both for text and speech, are trained on vast amounts of unlabeled data and can subsequently be fine-tuned for specific tasks using limited labeled data. Multilingual foundation models have garnered significant attention due to their ability to handle hundreds of languages within a single model. 
However, they face a challenge known as the curse of multilinguality: in order to maintain high performance across all supported languages, these models require an increase in the number of parameters, leading to larger memory requirements and slower inference times. This can render the use of such models impractical in certain scenarios. To address this issue, research has been conducted on model compression techniques, although these methods may inadvertently exacerbate biases present in the model. This internship focuses on OpenAI\u2019s Whisper, a family of multilingual multi-task speech models known for their impressive performance in speech recognition. These models exhibit robustness when transcribing speech recorded under various conditions, surpassing the capabilities of previous models. However, there remain important questions to explore regarding Whisper and its multitask learning approach. Although the model presents exceptional capability for transcribing and translating English, its performance in other languages indicates a decline in multilingual capabilities as the model size decreases. Additionally, we aim to investigate how this multilingual architecture handles biases related to different speakers, including gender, age, and accent. These questions drive our research to enhance the understanding of Whisper\u2019s capabilities and limitations. 1.2 Internship Objectives This internship has three main objectives: (1) Conduct a comprehensive analysis of bias within the Whisper model family, with a specific focus speaker-related (gender, age, accent) and modelrelated (model size, resourcefulness, similar languages) biases; \f2 INTRODUCTION (2) Explore how light compression techniques, such as quantization, may either mitigate or exacerbate any identified biases within the Whisper models; (3) Propose a better compression approach that effectively reduces any disparities found in the models. 1.3 Contributions of this work This work offers two significant contributions. Firstly, it provides a comprehensive analysis of the biases present in the Whisper model and examines how quantization impacts these biases. Secondly, it introduces an alternative model compression method called DistilWhisper, which enhances the performance of smaller Whisper models. Additionally, all models and code developed in this research will be made available as open-source resources. The structure of this report is as follows: Chapter 2 provides essential fundamentals and a comparison with related work to establish a foundational understanding. Chapter 3 details the experimental setup and results of the investigation into bias when quantizing Whisper. This investigation leads to the proposal of DistilWhisper, in Chapter 4, a novel parameter-efficient distillation approach that leverages small pre-trained models. Chapter 5 covers the validation of the proposed approach, as well as some interesting analysis. Finally, Chapter 6 summarizes the primary findings and conclusions of this work. 1.4 About NAVER LABS Europe NAVER LABS is the R&D subsidiary of NAVER, Korea\u2019s leading internet company and the part of NAVER responsible for creating future technology. Its world-class researchers in Korea and Europe create new connections between people, machines, spaces and information by advancing technology in AI, robotics, autonomous driving, 3D/HD mapping and AR. 
NAVER LABS Europe is the biggest industrial research lab in artificial intelligence in France and a hub of NAVER\u2019s global AI R&D Belt, a network of centers of excellence in Korea, Japan, Vietnam, USA & Europe. The scientists at NAVER LABS Europe conduct fundamental and applied research in machine learning (optimization, robotics), computer vision, natural language processing and UX and ethnography. The site is located in Grenoble, France. \fBACKGROUND AND RELATED WORK 3 2 Background and Related Work 2.1 State of the Art for Automatic Speech Recognition Current ASR approaches primarily involve adapting pre-trained Transformer stacks (Vaswani et al., 2017), which are initially trained through self-supervised learning (SSL) on unlabeled audio data. These pre-trained models can vary in their use of pre-text tasks (e.g., wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021), WavLM (Chen et al., 2022)) and the range of languages they cover (e.g., XLSR-53 (Conneau et al., 2021), XLS-R (Babu et al., 2022), MMS (Pratap et al., 2023), Google-USM (Y. Zhang et al., 2023)). This development of models has also seen the introduction of monolingual and multilingual SSL benchmarks. Examples of such benchmarks include SUPERB for English (Yang et al., 2021), LeBenchmark (Evain et al., 2021) for French, and ML-SUPERB (Shi et al., 2023), which covers 143 languages. In contrast to this line of research, the Whisper model relies on weak supervision, meaning it is trained solely on weakly labeled data (without self-supervision). Nevertheless, with an ample amount of data, the Whisper model achieves competitive results when compared to monolingual (Gandhi et al., 2022; Radford et al., 2023) and multilingual (Pratap et al., 2023) SSL models. More details about Whisper can be found on Section 2.6. For broader ASR benchmarks, facilitating comparisons between SSL pretraining and multitasking weakly-supervised training, the ESB benchmark from HuggingFace (Gandhi et al., 2022) for English is an illustrative example. 2.2 Domain Adaptation Domain adaptation consist in the process of adapting a pre-existing trained model to a new domain or task with minor weight adjustments, rather than retraining the entire model from scratch. In the past, this adaptation was primarily carried out through full fine-tuning, where all the model\u2019s weights were updated. In the case of Transformerbased models, it is also common to proceed adaptation choosing to update only specific layers, usually the final ones (Laskar et al., 2022). More recently, the practice of domain adaptation has seen the emergence of Adapterbased techniques, initially proposed by Houlsby et al. (2019). Adapters are lightweight modules commonly used in both NLP and Speech to adapt pre-trained models to new tasks or domains. In speech-related tasks, Adapter-based fine-tuning has found applications in speech translation (Antonios et al., 2022; Gow-Smith et al., 2023; Le et al., 2021), domain adaptation (Thomas et al., 2022; Tomanek et al., 2021), and other \f4 BACKGROUND AND RELATED WORK tasks. They have demonstrated comparable performance to standard fine-tuning while utilizing only a fraction of trainable parameters. Furthermore, there are efforts to adapt Whisper models to specific tasks using LoRA adapters (e.g. Arabic dialect identification (Radhakrishnan et al., 2023), spoken language understanding (M. Wang et al., 2023), emotion recognition (Feng & Narayanan, 2023)). This technique is elaborated in Section 2.2.1. 
Additionally, some work involves full fine-tuning for task adaptation (e.g child spoken language understanding (Jain et al., 2023)). In contrast to adapters and full fine-tuning, our work introduces gated Language-specific layers into the Whisper model and presents a parameter-efficient Knowledge Distillation approach. These innovations enhance the model\u2019s robustness to out-of-domain data. 2.2.1 Low-rank Adapters (LoRA) Low-rank Adapter (LoRA) fine-tuning, as proposed by Hu et al. (2022), is a technique designed to reduce memory requirements for domain adaptation. This is achieved by introducing new trainable parameters into a pre-trained neural network while keeping the original pre-trained model weights fixed. These introduced parameters take the form of trainable rank decomposition matrices, and they are inserted between specific layers or blocks of the model. This approach significantly reduces the number of parameters that need to be fine-tuned when adapting the model for specific downstream tasks. For example, when fine-tuning a multilingual multi-task model for a single language and task, LoRA adapters help streamline the adaptation process. The key assumption behind LoRA is that weight matrix updates in Transformer-based models exhibit a low \"intrinsic rank\" when undergoing full fine-tuning. This means that a pre-trained weight matrix, denoted as W0 \u2208Rd\u00d7k, can be effectively represented using a low-rank matrix decomposition, denoted as W0 + \u2206W = W0 + BA, where B \u2208Rd\u00d7r, A \u2208Rr\u00d7k, and the rank r \u226amin(d, k). Importantly, during LoRA fine-tuning, the W0 part remains fixed (frozen) and does not receive gradient updates, while A and B become sets of trainable parameters. h = W0x + \u2206Wx = W0x + BAx (2.1) One significant advantage of this approach is that it allows for parallel computation during the forward pass. Specifically, the forward pass output h can be efficiently computed \fBACKGROUND AND RELATED WORK 5 in parallel, and then the partial results are summed coordinate-wise, as presented in Equation 2.1. 2.3 Quantization Quantization is a well-established technique in the field of Deep Learning, employed to increase the efficiency of neural networks. Historically, neural networks were often trained using low-precision numerical representations (Hubara et al., 2017). However, a recent trend, particularly in NLP , involves post-training quantization. This technique entails applying quantization to models after they have been trained with regular, higher precision. This approach has gained traction as it offers the dual benefits of reducing inference latency and model size. Post-training quantization has found widespread use in various domains, including machine translation and language models (Bondarenko et al., 2021; Liang et al., 2021; Menghani, 2023; Wu et al., 2020). Quantized NLP models have yielded promising results, making it an appealing approach. One of the most widely adopted techniques for post-training quantization in both NLP and speech communities is the LLM.int8() algorithm (Dettmers et al., 2022). This method implements quantization in the feed-forward and attention projection layers of the Transformer architecture. The method has two parts: vector-wise quantization and mixed precision decomposition. In the vector-wise quantization, it is determined conversion constants that allow for the recovery of original numbers from 8-bit to 16-bit floating-point representations. 
This enables matrix multiplication to be carried out in the lower 8-bit precision. Moreover, in the mixed precision decomposition, it identifies potential outliers that could be adversely impacted by reduced precision and then executes this part of the matrix multiplication in 16-bit precision. While initially designed for decoder-only large language models (LLMs), this quantization method, along with its 4-bit variation (Dettmers & Zettlemoyer, 2023), has gained widespread adoption for various Transformer-based models. It has been made readily available in the Transformers library by Hugging Face (Wolf et al., 2020), contributing to its popularity. Additionally, it is becoming common to combine this quantization technique with domain adaptation methods. For instance, the QLoRA (Dettmers et al., 2023) method incorporates LoRA adapters on top of a quantized Transformer model. \f6 BACKGROUND AND RELATED WORK 2.4 Knowledge Distillation Knowledge distillation (KD) has been initially proposed by Hinton et al. (2015) to distill knowledge from ensemble of models into a single model. Over time, KD has evolved to distill knowledge from a large teacher model into smaller student models (Mohammadshahi et al., 2022; Sanh et al., 2020; Shen et al., 2023). Knowledge distillation can be approached in two primary ways: representation matching or distribution matching. In this work, our focus is on distribution matching. Traditional distribution matching knowledge distillation methods involves minimizing the Kullback\u2013Leibler (KL) divergence between a teacher model and a student model. This is mathematically represented by Equation 2.2: JKL = DKL(p\u2225q\u03b8) = EY\u223cp \u0014 log p(Y) q\u03b8(Y) \u0015 (2.2) where p is the teacher distribution, q\u03b8 is the student distribution, and Y is sampled from the teacher distribution. However, learning based on KL divergence at the sequence level can often lead to the student distribution becoming overly smooth, as it attempts to cover the entire support of the teacher distribution. This behavior arises due to the asymmetric nature of the KL divergence, a phenomenon sometimes referred to as the mode-averaging problem, as demonstrated by (Wen et al., 2023). Recent research (Go et al., 2023; Wen et al., 2023) have shown that symmetric divergences, such as the Jensen-Shannon (JS) divergence, exhibit fewer borderline behaviors and tend to yield improved results in sequence-level distillation. Traditional JS divergence is expressed in Equation 2.3: JJS = DJS(p\u2225q\u03b8) = 1 2EY\u223cp h log p(Y) m(Y) i + 1 2EY\u2032\u223cq\u03b8 h log q\u03b8(Y\u2032) m(Y\u2032) i (2.3) where p is the teacher distribution, q\u03b8 is the student distribution, Y and Y\u2032 are sampled from the teacher\u2019s and student\u2019s distributions and compared with their average m(\u00b7) = 1 2p(\u00b7) + 1 2q\u03b8(\u00b7). 2.5 Datasets for Multilingual ASR Here we present two widely used massively-multilingual datasets that will be used in this work: CommonVoice 13.0 and FLEURS. \fBACKGROUND AND RELATED WORK 7 2.5.1 CommonVoice 13.0 The CommonVoice 13.0 (CV-13) corpus (Ardila et al., 2020), represents the latest iteration of a massively multilingual collection of transcribed speech. It serves as a valuable resource for research and development in the field of speech technology. While primarily designed for Automatic Speech Recognition (ASR) applications, this dataset also finds utility in other domains, such as language identification. 
The utterances comprising this dataset are sourced from Wikipedia articles and supplemented with utterances contributed by language communities. These are subsequently narrated by contributors through Mozilla\u2019s website or iPhone app. To ensure data quality, contributions undergo validation by other volunteers, with only validated data being incorporated into the train, validation, and test subsets splits of the dataset. As of the current version, the dataset encompasses a rich tapestry of 110 languages, though the number of utterances per language varies significantly. 2.5.2 FLEURS The FLEURS (Conneau et al., 2023) is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark (Goyal et al., 2022), with approximately 12 hours of speech supervision per language. It was meant for few-shot learning on a variety of speech tasks, including Automatic Speech Recognition, Speech Language Identification, Speech Translation and Retrieval. The creation of this dataset involved the recording of all the publicly available sentences from FLoRes-101 (from dev and devtest split subsets). Each sentence was recorded by three paid native-speaker experts per language. Subsequently, these spoken sentences underwent a thorough evaluation by paid evaluators to ensure the overall quality and accuracy of the recorded content. The dataset is unbalanced as not all the sentences were validated, but most part of the languages have between 2400 and 3300 utterances on the train split, with an average 12 seconds per audio sample. 2.6 The Whisper Model In this section we present Whisper (Radford et al., 2023), the base model for the studies conducted in this work. \f8 BACKGROUND AND RELATED WORK \u22ef \u22ef 2\u00a0\u00d7 Conv1D + GELU \u22ee cross attention Log-Mel Spectrogram ~ SOT EN TRANSCRIBE 0.0 The quick Tokens in Multitask Training Format Transformer Encoder Blocks Transformer Decoder Blocks EN 0.0 The quick brown \u22ee \u22ee next-token prediction Sinusoidal Positional Encoding Learned Positional Encoding Multitask training data (680k hours) Sequence-to-sequence learning Multitask training format English transcription Any-to-English speech translation Non-English transcription No speech \ud83d\udde3\ufe0f \u00a0 \u201cAsk not what your country can do for \u22ef\u201d \ud83d\udcdd\u00a0\u00a0Ask not what your country can do for \u22ef \ud83d\udde3\ufe0f \u00a0 \u201cEl r\u00e1pido zorro marr\u00f3n salta sobre \u22ef\u201d \ud83d\udcdd\u00a0 The quick brown fox jumps over \u22ef \ud83d\udde3\ufe0f \u00a0\u201c\uc5b8\ub355 \uc704\uc5d0 \uc62c\ub77c \ub0b4\ub824\ub2e4\ubcf4\uba74 \ub108\ubb34\ub098 \ub113\uace0 \ub113\uc740\u00a0\u22ef\u201d \ud83d\udcdd\u00a0\u00a0\uc5b8\ub355 \uc704\uc5d0 \uc62c\ub77c \ub0b4\ub824\ub2e4\ubcf4\uba74 \ub108\ubb34\ub098 \ub113\uace0 \ub113\uc740\u00a0\u22ef \ud83d\udd0a\u00a0(background music playing) \ud83d\udcdd\u00a0 \u2205 PREV special tokens text tokens timestamp tokens START OF TRANSCRIPT LANGUAGE TAG NO SPEECH EOT TRANSCRIBE TRANSLATE begin time NO TIMESTAMPS \u22ef end time text tokens begin time end time text tokens text tokens Voice activity detection (VAD) Custom vocabulary / prompting Time-aligned transcription Text-only transcription (allows dataset-specific fine-tuning) X\u00a0\u2192 English Translation\u00a0 previous text tokens X\u00a0\u2192 X Transcription\u00a0 Language identification MLP self attention MLP self attention MLP self attention MLP cross attention self attention MLP cross attention self attention 
MLP cross attention self attention TRANSCRIBE Figure 1 The Whisper model architecture (Source: Radford et al. (2023)) 2.6.1 Overview Whisper is designed to serve as a versatile end-to-end Automatic Speech Recognition (ASR) model suitable for a wide range of applications and languages. When it comes to ASR, previous research has predominantly focused on two key approaches: large-scale Unsupervised Learning (Y. Wang et al., 2022) and Supervised Learning as discussed in Section 2.1. In the case of large-scale Unsupervised Learning, models benefit from training on vast, low-cost, and unlabeled datasets, which helps in building a high-quality encoding component. However, these models generate output that is not directly usable for ASR applications and requires further fine-tuning. On the other hand, Supervised Learning approaches utilize pretrained models that can be directly used for ASR tasks. However, they often struggle to generalize when faced with shifts in the data distribution, primarily due to the limited size of the datasets they were originally trained on. Additionally, creating large-scale human labeled datasets for these models can be prohibitively expensive. \fBACKGROUND AND RELATED WORK 9 Whisper takes a unique approach by introducing Weakly Supervised Learning, striking a balance between data quality and quantity. The Whisper training dataset is curated by collecting pairs of audio and corresponding transcripts from the internet (mainly YouTube videos). After some minimal processing, that included employing language identification with the model proposed by Valk and Alum\u00e4e (2021), this dataset comprises a substantial 680, 000 hours of highly diverse audio content. Notably, it encompasses 96 languages besides English, with approximately 17.2% of the dataset consisting of audio and transcript pairs in the same language (ASR). Additionally, around 18.4% of the pairs have English-translated transcripts. This unique approach provides Whisper with several advantages. Firstly, the Whisper encoder benefits from the rich and diverse dataset, making it perform exceptionally well, similar to Unsupervised settings. Secondly, Whisper is trained with relatively clean labels, allowing it to be used in a Zero-Shot manner without the need for extensive finetuning. 2.6.2 Architecture The architecture of Whisper consists of the original Transformer architecture (Vaswani et al., 2017) preceded by dimension reduction layer called stem. The architecture is visually depicted in Figure 1. Stem The stem comprises a pair of 1-dimensional Convolution Layers, each accompanied by GELU activations. Both convolution layers employ filters of size 3 and produce d output channels. The value of d varies across different sizes of the Whisper architectures. The first convolution layer operates with a stride of 1, while the second employs a stride of 2 (effectively reducing the length of the input sequence by half). Consequently, the output of the stem consists of a sequence of 1500 elements, each with dimension d. As the self-attention layers in a Transformer exhibit quadratic complexity concerning the sequence length, for a fixed hidden representation size of d, the stem significantly reduces the computational complexity by a factor of 4. Transformer In their work, Radford et al. (2023) primarily highlights the impact of scaled Weak Supervision on ASR system performance, with less emphasis on architectural modifications. 
The base architecture employed for Whisper is the encoder-decoder Trans\f10 BACKGROUND AND RELATED WORK former, which is renowned for its scalability and reliability in several sequence-tosequence tasks. However, the Whisper Transformer does introduce a few key modifications compared to the original Transformer architecture. Sinusoidal encodings are added to the input representations of the encoder, while the positional encodings in the decoder are learned. Additionally, GELU activation functions are used instead of ReLU, and these activations are applied following the residual blocks. Moreover, a normalization layer is included in the encoder\u2019s output. Furthermore, Whisper offers a range of five different architecture sizes, as detailed in Table 1. These varying sizes cater to different requirements and performance needs, allowing for flexibility in ASR tasks. Model Layer (L) Width (d) Parameters Tiny 4 384 39M Base 6 512 74M Small 12 768 244M Medium 24 1024 769M Large 32 1280 1550M Table 1 Architectural specifications for the Whisper model family. L denotes the number of layers per block, indicating that, for example, the tiny model with L = 4 consists of 4 transformer layers in the encoder and 4 in the decoder. Tokenization To tokenize transcripts, the Whisper model employs the BPE (Byte Pair Encoding) tokenizer originally introduced in GPT-2 by Radford et al. (2019). When dealing with languages other than English, the tokenizer is adapted by refining it until the vocabulary size matches that of English. 2.6.3 Multitasking Whisper is trained and operates as a multitask model, capable of handling various sub-tasks within a single end-to-end architecture. These sub-tasks encompass Voice Activity Detection, Language Identification, Text Alignment, Transcription, Translation, and more. To delineate each task and the expected format of the subsequent predictions, specific tokens are employed, as delineated in Table 2. These tokens are positioned at the start of the output sequence, providing task context (see Figure 1). Token generation follows an auto-regressive process, reliant on prior tokens. For ex\fBACKGROUND AND RELATED WORK 11 ample, when the detected language is French, the model computes the likelihood of token w at position k\u2032, as illustrated in Equation 2.4: P(wk\u2032 = w| . . . , <|fr|>, |transcribe|, . . . , wk\u2032\u22121, X) (2.4) Consequently, the generated tokens will probably only belong to the French vocabulary as they have higher conditional probabilities compared to ones belonging to other languages. Tasks Tokens Language Identification <|LANGUAGE|> e.g. <|en|>, <|gl|>, <|fr|>, <|fa|>, etc. Voice Activity Detection <|nospeech|> Transcribe <|transcribe|> Translate <|translate|> Alignment <|notimestamps|> Table 2 Subset of special tokens associated with Whisper\u2019s multitasks. For Language Identification, each language is specified with a token, and a single token is added to the sequence. This token is required. For Voice Activity Detection, only when the audio does not contain clear speech that its corresponding token is present in the output. The tasks Transcribe and Translate are mutually exclusive, but one of them is required. Additionally, certain special tokens can be predefined to simplify predictions. In our work, we specifically enforce transcription and language tokens, thereby eliminating dependency on Language Identification quality for under-represented languages. Tasks not pertinent to our study are also disregarded. 
\f12 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS 3 Bias Analysis on Quantized Speech Models In this chapter, we aim at addressing the two first objective of the internship: understand the bias presented on Whisper models, and investigate how these are impacted by the employment of quantization. 3.1 Experimental Setup 3.1.1 Dataset preparation In our research, we employed the two widely recognized datasets described in Section 2.5: FLEURS and Common Voice 13.0 (CV-13). These datasets provide valuable speaker-related information, including gender, language group (in the case of FLEURS), accent (exclusive to CV-13), and age (exclusive to CV-13). Building upon the information available in FLEURS, we curated a gender-balanced benchmark, which we refer to as Balanced-FLEURS. The primary goal here was to mitigate the influence of confusion variables such as sentence complexity and gender imbalance (where certain languages exhibit a higher percentage of speakers from one gender). To achieve this, we mixture the train, validation, and test sets of FLEURS, meticulously filtering them to ensure that each sentence was narrated by both a male and a female speaker. Meanwhile, we also ran a Voice Activity Detection model on the dataset, as we encountered a notable number of empty audio files in Spanish, Norwegian, and Malay1. We include in the experiments only the languages in which we were able to find at least 200 utterances. In addition to Balanced-FLEURS, we made use of the Common Voice 13.0 dataset, specifically its validation set. In this case, we leveraged gender and age information. While we attempted to incorporate accent information in our study as well, we encountered challenges in aggregating a sufficiently large dataset, even after merging the train, test, and validation splits. Consequently, we do not report our results with respect to accents. 3.1.2 Resourcefulness categorization In the course of our experiments, we have introduced a resourcefulness classification system specifically tailored to weakly-supervised speech models, with a primary focus 1 We have reported this issue to the Google Team via HuggingFace, listing all problematic files. The corresponding issue can be found here: https://huggingface.co/datasets/google/fleurs/discussions/16#6442a217f8b647fa4f50c489 \fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 13 on the transcription task (ASR). This categorization is designed to group languages based on the amount of training data used in the model pre-training. The classification involves clustering languages into categories with similar amounts of training data, and the intervals used for this classification can be found in Table 3. Resourcefulness ASR Training data (h) Super High-Resource \u22655000 High-resource [1000, 5000) Mid-to-high-resource [500, 1000) Low-to-mid-resource [100, 500) Low-resource [10, 100) Extremely Low-Resource (0, 10) Table 3 Proposed Language resourcefulness categorization for Weakly-supervised ASR models It is worth noting that our proposed classification system has a limitation in the context of Whisper. Specifically, it does not account the volume of training data available for the speech translation task. While this data does not directly impact the quality of generated text data for a language (since in Whisper, translation data available is to English only), it does play a role in enhancing the model\u2019s speech encoding capabilities. 3.2 Bias evaluation on Whisper In this section, we present preliminary experiments conducted on the Whisper model. 
Our aim here is to investigate whether bias exists in the original versions of Whisper. To achieve this, we compare Whisper\u2019s performance on the validation split of CV-13 and on Balanced-FLEURS. Our analysis involves an aggregate approach, where we average the metrics across languages. Figures 2 (Balanced-FLEURS) and 3 (CV-13) showcase the Word Error Rate (WER) performance across the languages covered in the two datasets for whisper-large-v2. These results reveal a clear correlation between performance and resourcefulness, with lower resource languages (Low and Extremely Low-Resource) consistently exhibiting poorest performance. Naturally, the impact varies among languages, possibly due to their complexity or the amount of training data available for closely-related languages. These findings collectively suggest a bias linked to resourcefulness. \f14 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 2 Performance across languages on whisper-large-v2 on Balanced-FLEURS. Languages are ranked on x-axis based its performance. Figure 4 illustrates the average relative difference between male and female speakers for Balanced-FLEURS on whisper-large-v2. This metric, already employed is previous similar study by Boito et al. (2022), is relevant here as the sentences are consistently the same across genders. Meanwhile, Figure 5 displays the absolute difference (following Costa-juss\u00e0 et al. (2022)) in WER between male and female speakers on CV-13. In both cases, the results show varying degrees of gender bias across different languages. Remarkably, these biases are consistent across the different datasets, implying that each language possesses its unique bias, likely attributed to the quality and diversity of its training data. While the model does exhibit gender bias, it is essential to note that, for the most part, this bias remains within a maximum average WER difference of 3 for the majority of languages (in the case of CV-13). Figure 6 extends the analysis by presenting WER performance across different languages on Balanced-FLEURS, mirroring Figure 2. However, this time, we consider all available model sizes within the Whisper family. Languages are ranked by resourcefulness. These results unveil two significant findings: (i) the performance trend aligns across nearly all languages, suggesting a consistent ranking of languages based on performance across all models; and (ii) notably, a clear correlation emerges between smaller model sizes and reduced performance, with the model curves closely overlapping. This phenomenon likely stems from the curse of multilinguality, wherein less resourceful languages exhibit larger performance disparities among model sizes. Addi\fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 15 Figure 3 Performance across languages on whisper-large-v2 on CV-13. Languages are ranked on x-axis based its performance. Figure 4 Average relative WER difference between male and female voice for Balanced-FLEURS. Languages are ranked on x-axis based its relative difference and resourcefulness. tionally, it\u2019s worth noting the differences between large and large-v2 models. Although both models share the same size, the former benefits from more extensive training, additional optimization steps, and data augmentation techniques. Finally, these findings collectively shed light on bias associated with architecture size, despite models being trained with the same dataset. 
\f16 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 5 Absolute WER difference between male and female voice for CV-13. Languages are ranked on x-axis based its absolute difference. Figure 6 Performance across languages and across different whisper sizes on Balanced-FLEURS. Languages are ranked on x-axis based its resourcefulness. 3.3 Bias evaluation on quantized Whisper Now, we delve into the quantized version of Whisper. In this set of experiments, we apply the LLM.int8() method (Dettmers et al., 2022) (described in Section 2.3) to Whisper. The primary objective of this study is to investigate whether the biases observed in the original Whisper model persist, diminish, or intensify after quantization. In essence, we seek to understand what model features may be forgotten due to quantization. In contrast to the previous section, our analysis here adopts a sentence-level approach. We compare the model\u2019s performance on individual sentences before and after quantization. To ensure a fair evaluation, we exclude sentences with initial Word Error Rate (WER) values greater than or equal to 100. For this sentence-level analysis, we create histograms based on the absolute difference in WER before and after compression. We categorize sentences into three groups: those that worsened (WER increased by \fBIAS ANALYSIS ON QUANTIZED SPEECH MODELS 17 more than 5), those that remained similar (WER difference less than 5), and those that improved (WER reduced by more than 5). Figure 7 Histogram of performance degradation by quantization per gender on Balanced-FLEURS Figure 8 Histogram of performance degradation by quantization per gender on CV13 Figures 7 (Balanced-FLEURS) and 8 (CV-13) present histograms categorized by gender for the whisper-large-v2 model. Figure 3 displays histograms categorized by age group for CV-13. The data clearly indicates that quantization equally impacts all genders and age groups, implying that gender and age biases are kept unchanged after quantization. In figures 10 (Balanced-FLEURS) and 11 (CV-13), we illustrate histograms categorized by language resourcefulness for whisper-large-v2. Here, a distinct pattern emerges: lower-resource languages are more significantly affected by quantization. While almost all sentences in super high-resource languages maintain their performance, approximately 25% of sentences in extremely low-resource languages are impacted (in the case of Balanced-FLEURS). Consequently, quantization amplifies the resourcefulness bias. \f18 BIAS ANALYSIS ON QUANTIZED SPEECH MODELS Figure 9 Histogram of performance degradation by quantization per age group on CV-13 Figure 10 Histogram of performance degradation by quantization per resourcefulness group on Balanced-FLEURS Lastly, in figure 12 (Balanced-FLEURS) and ?? (CV-13), we present histograms considering all available model sizes within the Whisper family, grouped by model size. The results highlight significant differences in how quantization affects models of varying sizes. While a small proportion of sentences are impacted for whisper-large-v2, there is a striking contrast, with almost half of the sentences affected in the case of whisper-tiny. This highlights that the bias related to architecture size is significantly amplified by quantization. This last finding indicates that smaller models are generally more susceptible to the effects of quantization. 
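The sentence-level protocol described above can be summarized by the small helper below, which pairs each sentence's WER before and after quantization, drops sentences whose original WER is already at or above 100, and buckets the rest using a 5-point band. Function and variable names are illustrative, not taken from the project code.

def categorize_quantization_impact(wer_original, wer_quantized, threshold=5.0, cap=100.0):
    """Bucket sentences by how quantization changed their WER.

    Sentences whose original WER is >= `cap` are discarded; the remaining
    ones are labelled 'worsened', 'similar' or 'improved' with a +/- `threshold` band.
    """
    buckets = {"worsened": 0, "similar": 0, "improved": 0}
    for before, after in zip(wer_original, wer_quantized):
        if before >= cap:
            continue  # drop degenerate transcriptions, as in the protocol above
        delta = after - before
        if delta > threshold:
            buckets["worsened"] += 1
        elif delta < -threshold:
            buckets["improved"] += 1
        else:
            buckets["similar"] += 1
    return buckets

Grouping the resulting counts by gender, age, resourcefulness or model size then yields the histograms discussed in this section.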
This susceptibility of smaller models is particularly concerning, as many parameter-efficient domain adaptation methods used today in NLP and speech apply quantization first, without considering the model size. This calls for practitioners to exercise caution when adapting pre-trained models, so as to avoid adding unintended bias.

Figure 11: Histogram of performance degradation caused by quantization per resourcefulness group on CV-13.

Figure 12: Histogram of performance degradation caused by quantization per model size on Balanced-FLEURS.

3.4 Summary of the main findings

Here we present the key takeaways from this chapter. First, Whisper exhibits certain speaker-related biases, such as gender and age. These biases remain unchanged after applying quantization to the model. On the other hand, biases associated with the model itself (model-related biases), including language resourcefulness and architecture size, are amplified by quantization. Overall, low-resource languages are the most adversely affected by quantization. Moreover, there is a clear pattern with respect to architecture size, with smaller models experiencing more significant performance degradation than larger ones. This is concerning because current parameter-efficient approaches (such as QLoRA, presented in Section 2.3) mostly apply quantization first, regardless of model size. This raises a significant challenge: can we enhance the performance of smaller models for languages where they currently perform poorly, even though the best model performs well? We therefore search for an alternative to quantization for reducing the model size.

4 DistilWhisper

One prominent observation is the significant automatic speech recognition (ASR) performance gap between the whisper-large-v2 model and its smaller counterparts across a diverse set of languages. This gap is noticeable over a wide spectrum of languages, including low-resource ones but also many mid- and high-resource languages. As our earlier analysis in Chapter 3 revealed, the lower-resource languages are also the most affected by lightweight compression techniques. This phenomenon is often referred to as the curse of multilinguality (as discussed in related work by Arivazhagan et al. (2019), Conneau et al. (2020), and Goyal et al. (2021)): when a single model attempts to cover an extensive array of languages, performance inevitably suffers unless the model is significantly scaled up. This leads us to the central question that motivates our research: can we improve the performance of smaller models for languages in which they currently perform poorly, but the best model performs well? A common approach to achieving efficient inference is to distill knowledge from a larger multilingual teacher model into a smaller pre-existing one, as highlighted in prior work by Sanh et al. (2020) and Mohammadshahi et al. (2022). However, when it comes to applying such knowledge distillation (KD) to whisper-large-v2, the best and largest Whisper model, we face a significant hurdle.
This is because we need access to information that is not readily available, such as comprehensive training data spanning all tasks and languages, and its original learning objective, in order to maintain the original model\u2019s robustness. Recent research findings, exemplified by works like Pfeiffer et al. (2022) and Pratap et al. (2023), have demonstrated an alternative solution to the curse of multilinguality. This approach involves equipping moderately sized models with language-specific (LS) modules. This sparse architectural design permits the extension of model parameters through additional modules as more languages are incorporated into the model. Consequently, it ensures consistent performance across languages without incurring substantial additional computational costs during inference. In light of the overarching goal to enhance model performance for various languages within the constraints of limited model capacity, our work introduces the DistilWhisper approach. We incorporate conditional language-specific routing (CLSR) modules, as described by B. Zhang et al. (2021), into a smaller version of Whisper. We then opti\f22 DISTILWHISPER Decoder CLSR Layer Cross-Attention Self-Attention Encoder CLSR Layer Self-Attention whisper-large-v2 LKD LCLSR whisper-small + CLSR Fine-tuning dataset LK x12 Fine-tuned\u00a0 Language-specific Layers Shared all ca cs uk ... g g x12 Frozen\u00a0 g Figure 13 The DistilWhisper optimization approach (left), and its architecture (right). The feed-forward is replaced by a CLSR module, where the LS gates (g) learn to alternate between the pre-trained frozen multilingual representation and the LS layer. mize these modules jointly through ASR fine-tuning and knowledge distillation from a larger Whisper model (whisper-large-v2). For a visual representation of our architecture, please refer to Figure 13, and in the subsequent sections, we delve into the key components of our approach. Following, in this chapter, we detail the elements that make up our approach. Then, in the next chapter (Chapter 5), we will present how we validate this approach and its results following the DistilWhisper approach presented here. 4.1 Conditional Language-Specific Routing We extend Conditional Language-Specific Routing (CLSR) modules proposed by B. Zhang et al. (2021), and commonly used in Multilingual Neural Machine Translation, for the first time to the speech domain. This module, which introduces sparsity to the Transformer architecture, learns a hard binary gate g(\u00b7) for each input token by using its hidden embedding zl. These decisions enable a layer to selectively guide information through either a LS path denoted as hlang or a shared path referred to as hshared, as in Eq. 4.1: CLSR(zl) = g(zl) \u00b7 hlang(zl) + (1 \u2212g(zl)) \u00b7 hshared(zl). (4.1) In contrast to the original CLSR, in this work we use language-specific gates as shown in Figure 13, instead of sharing them across languages. This allows us to train languagespecific components individually (i.e. in parallel), and then only load the relevant modules at inference. Moreover, our approach also differs from the original CLSR by the positioning: supported by previous work (Pfeiffer et al., 2022; B. Zhang et al., 2021), we limit CLSR to the feed-forward network (correspondent to the feature domain of the Transformer architecture), which we also replace entirely by the CLSR module, reducing the increment in the number of parameters. \fDISTILWHISPER 23 Following the proposal from B. Zhang et al. 
(2021), each gate g(·) is implemented as a two-layer bottleneck network, to which an increasing zero-mean Gaussian noise is added during training in order to discretize it:

$$g(z_l) = \sigma\big(G(z_l) + \alpha(t) \cdot \mathcal{N}(0, 1)\big), \qquad (4.2)$$

with

$$G(z_l) = \mathrm{ReLU}(z_l W_1 + w_2), \qquad (4.3)$$

where $\sigma(\cdot)$ is the logistic-sigmoid function, and $W_1$ and $w_2$ are trainable parameters. $\alpha$ is linearly increased along the training steps $t$. At inference time, we adopt hard gating:

$$g(z_l) = \delta\big(G(z_l) \geq 0\big), \qquad (4.4)$$

where $\delta(\cdot)$ is a Dirac measure.

4.2 DistilWhisper approach

Figure 13 presents our proposed DistilWhisper architecture. Our student is enriched with CLSR modules at each feed-forward layer for each language. All experts in a CLSR layer are initialized identically from the frozen weights of the corresponding feed-forward layer. At training time, for each language, the model updates only the corresponding language-specific experts and gates. At inference time, the model loads the shared (multilingual) layers and the language-specific experts and gates for the languages of interest, resulting in a limited parameter overhead. We highlight that the use of CLSR modules brings more flexibility to our architecture compared to adapters, as it allows routing at the token level. This makes the approach better able to leverage pre-existing knowledge (the shared frozen module), activating the language-specific path only when it is likely to increase performance.

4.3 DistilWhisper optimization

The optimization of our DistilWhisper architecture consists of a standard cross-entropy loss along with two new elements: a gate budget loss and knowledge distillation. We detail these new elements below.

4.3.1 Gate budget loss

Following B. Zhang et al. (2021), when learning the CLSR module parameters, in addition to the standard cross-entropy loss $L_{CE}$, we optimize a gate budget loss $L_g$ to balance the model's usage of language-specific and shared modules. It relies on the gate activation values $g(\cdot)$ for a pair (audio, text) $(X, Y)$ in a batch $B$, aggregated as:

$$G_{(X,Y)} = \sum_{x \in X} \sum_{m \in M_{enc}} g_m(x) + \sum_{y \in Y} \sum_{m \in M_{dec}} g_m(y) \qquad (4.5)$$

where $M_{enc}$ and $M_{dec}$ are respectively the sets of encoder and decoder layers, and $g_m(\cdot) = 1$ when the LS expert is selected in layer $m$, and $g_m(\cdot) = 0$ otherwise. The average of this gate usage, which represents the proportion of language-specific experts used by the model over the batch, is constrained to a budget $b$, so the final gate budget loss is expressed as:

$$L_g = \left| \frac{\sum_{(X,Y) \in B} G_{(X,Y)}}{\sum_{(X,Y) \in B} \big(|X|\,|M_{enc}| + |Y|\,|M_{dec}|\big)} - b \right| \qquad (4.6)$$

For regularization, a skip-gate probability $s$ is also used, which randomly selects a proportion $s$ of the gates to be closed (i.e. to use only the shared path) during training.

4.3.2 Knowledge Distillation

For knowledge distillation (KD), following recent research (Go et al., 2023; Wen et al., 2023), we employ the Jensen–Shannon divergence (JS), whose loss is detailed in Eq. 4.7:

$$L_{KD} = \frac{1}{2}\,\mathbb{E}_{Y \sim p}\left[\log \frac{p(Y)}{m(Y)}\right] + \frac{1}{2}\,\mathbb{E}_{Y' \sim q_\theta}\left[\log \frac{q_\theta(Y')}{m(Y')}\right] \qquad (4.7)$$

where $p$ is the teacher distribution, $q_\theta$ is the student distribution, $Y$ and $Y'$ are sampled from the teacher's and student's distributions respectively, and both are compared with their average $m(\cdot) = \frac{1}{2}p(\cdot) + \frac{1}{2}q_\theta(\cdot)$.
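Before assembling the final learning objective, the sketch below recaps Eqs. 4.1 to 4.4 as a PyTorch-style module: a frozen shared feed-forward path, one trainable expert and one two-layer bottleneck gate per language, soft noisy gating during training and hard gating at inference. It is an illustrative reading of the equations under stated assumptions; the module names, bottleneck width, and the way experts are cloned are not taken from the released implementation.

import copy
import torch
import torch.nn as nn

class CLSRFeedForward(nn.Module):
    """Conditional language-specific routing over a pre-trained feed-forward block."""

    def __init__(self, shared_ffn: nn.Module, d_model: int, languages, gate_dim: int = 64):
        super().__init__()
        self.shared_ffn = shared_ffn  # frozen multilingual path h_shared
        for p in self.shared_ffn.parameters():
            p.requires_grad = False
        # one language-specific expert h_lang per language, initialized from the shared weights
        self.experts = nn.ModuleDict({l: copy.deepcopy(shared_ffn) for l in languages})
        for expert in self.experts.values():
            for p in expert.parameters():
                p.requires_grad = True
        # one two-layer bottleneck gate G(.) per language (Eq. 4.3)
        self.gates = nn.ModuleDict({
            l: nn.Sequential(nn.Linear(d_model, gate_dim), nn.ReLU(), nn.Linear(gate_dim, 1))
            for l in languages
        })

    def forward(self, z, lang: str, noise_scale: float = 0.0):
        logits = self.gates[lang](z)  # shape (..., 1)
        if self.training:
            # soft gate with annealed Gaussian noise (Eq. 4.2); noise_scale plays the role of alpha(t)
            g = torch.sigmoid(logits + noise_scale * torch.randn_like(logits))
        else:
            # hard routing at inference (Eq. 4.4)
            g = (logits >= 0).to(z.dtype)
        # Eq. 4.1: token-level mixture of the language-specific and shared paths
        return g * self.experts[lang](z) + (1.0 - g) * self.shared_ffn(z)

With hard gating, only one path contributes for a given token, which is what keeps the inference-time overhead limited to the modules of the loaded language.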
4.3.3 Final Learning Objective

The final learning objective leverages the dataset labels through the cross-entropy loss $L_{CE}$, but also enforces a specific budget via the gate budget loss $L_g$ and mirrors the behavior of the teacher through the knowledge distillation loss $L_{KD}$. Thus, the CLSR module parameters are learned to minimize the final loss:

$$L = L_{CE} + L_g + \beta L_{KD} \qquad (4.8)$$

where $\beta$ is a constant set according to the quality of the teacher, but it could also be scheduled or learned (with additional constraints on its magnitude).

5 Experiments and Results on DistilWhisper

In the previous chapter we presented the DistilWhisper approach. In this chapter we describe how we validate the architecture and the method as a whole, showing that our approach outperforms both classical fine-tuning and adapters on whisper-small, providing better generalization through lightweight ASR fine-tuning and knowledge distillation from the teacher model. Code and models produced in this study will soon be made available on Hugging Face and GitHub.

5.1 Experimental Setup

In this section we give an overview of our validation setup, which includes choosing the data used for training and evaluating models, as well as which languages and baselines to consider. We also discuss some implementation details.

5.1.1 Datasets

To validate the proposed architecture, we use samples of two widely used massively multilingual datasets: CommonVoice 13.0 and FLEURS. More details about these datasets are presented in Section 2.5. In our experiments, we downsampled both the train and validation sets of CV-13, ensuring an equal allocation of training data for each selected language in each experiment. For our primary experiment, we employed 10,000 utterances for training (approximately 14 hours of audio) and 1,000 for validation. Additionally, we explored variations in dataset size, using downsampled sets of 3,000 and 28,000 utterances in scalability experiments. The selection of data for downsampling was guided by the number of up-votes utterances received from annotators. Notably, we did not downsample the test set. For most of our experiments, FLEURS serves as an invaluable resource for out-of-domain evaluation. It offers a favorable degree of language overlap with the CommonVoice 13.0 dataset (CV-13), making it a suitable choice for comparative analysis, and it provides an effective out-of-domain setting for ASR evaluation: for instance, while the average number of tokens per sample in CV-13 is 36, FLEURS exhibits a substantially higher average of 97 tokens per sample.

5.1.2 Language Selection

In this work we focus on bridging the performance gap for a subset of under-performing languages of the whisper-small model through lightweight ASR fine-tuning and knowledge distillation from the whisper-large-v2 model, as proposed in Chapter 4. To validate our method, we consider all Whisper languages with a WER gap of more than 11 between the large and small models on CV-13. For our validation experiments we then narrow this list by requiring: 1) a minimum of 10k utterances; and 2) an overlap with the FLEURS dataset for out-of-domain evaluation. For the scalability experiments we relax the first requirement so that we can include a more diverse set of languages, considering a minimum of 3k utterances.
We also experiment with the languages in a setting with 28k utterances. Resourcefulness ASR Train data (h) Languages per setting 3k 10k 28k High-resource [1000, 5000) ca, fi, id, pl ca, pl ca Mid-to-high-resource [500, 1000) uk, vi uk Low-to-mid-resource [100, 500) cs, hu, ro, th, ta cs, hu, th, ta ta, th Low-resource [10, 100) bg, hi, sk, sl Extremely Low-Resource (0, 10) gl gl Table 4 Languages used in the experiments for validation of DistilWhisper grouped by resourcefulness. The final list of languages is: Bulgarian (bg), Catalan (ca), Czech (cs), Finnish (fi), Galician (gl), Hindi (hi), Hungarian (hu), Indonesian (id), Polish (pl), Romanian (ro), Slovak (sk), Slovenian (sl), Tamil (ta), Thai (th), Ukranian (uk), and Vietnamease (vi).2 These languages belong to 7 distinct language sub-families and exhibit significant variation in terms of their representation within the Whisper training data. This variation extends from a substantial 4,300 hours for certain languages, such as Polish (pl), to a mere 9 hours for languages like Galician (gl). For a detailed overview of these languages and their distribution across the three dataset sizes (3k, 10k, 28k), categorized by their resourcefulness (following the classification proposed on Section 3.1.2), please refer to Table 4. Additionally, Table 5 organizes these languages into groups based on their respective sub-families. 2 Although Arabic would also qualify considering our criteria, we find that the dialect from FLEURS differs from the ones present on CV-13. \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 27 Sub-families Languages per setting 3k 10k 28k Slavic (Indo-European) bg, cs, pl, sk, sl, uk cs, pl Romance (Indo-European) ca, gl, ro ca, gl ca Finno-Ugrian (Uralic) fi, hu hu Austroasiatic id, vi Dravidian ta ta ta Tai (Kra\u2013Dai) th th th Indo-Iranian (Indo-European) hi Table 5 Languages used in the experiments for validation of DistilWhisper grouped by language sub-families. 5.1.3 Models and Baselines In our evaluation, we assess our approach in comparison to several baseline models. These include the whisper-small model, serving as our pre-trained student and starting point, and the whisper-large-v2 model, acting as the teacher model, and ultimately, as the target goal. Additionally, we explore two fine-tuning (FT) approaches for the student model: standard fine-tuning, where all model weights are updated, and LoRA adaptation, which focuses on refining the feed-forward layer. Moreover, we delve into the effects of the Conditional Language-Specific Routing (CLSR) layer independently, without knowledge distillation (KD), referred to as CLSR-FT. This allows us to isolate the influence of KD from the impact of the CLSR layer on the model\u2019s overall robustness. 5.1.4 Implementation details We conducted our experiments using the Transformers library (Wolf et al., 2020) and leveraged the pre-trained weights of both whisper-small and whisper-large-v2 models, sourced from HuggingFace3 4. Unless where stated different, our training protocol consisted of ten epochs, utilizing a learning rate of 10\u22124 with linear decay, a one-epoch warm-up phase, a batch size of 16, and a label smoothing factor of 0.1. For LoRA adaptation, we tested two scenarios: 1) We first adopted the hyperparameters proposed by M. Wang et al. 
(2023), notably r = 32, which is the most commonly 3 https://huggingface.co/openai/ 4 https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013 \f28 EXPERIMENTS AND RESULTS ON DISTILWHISPER used for this type of adapters; 2) We increase the hidden dimension of the adapters to r = 64, so the size of the adapters are comparable to the Language-specific modules on DistilWhisper. In the case of training the CLSR, we set the gate budget (b) to 0.5 and the skip-gate probability (s) to 0.2. For knowledge distillation (KD), we employed the Jensen\u2013Shannon divergence (JS) with a temperature (\u03c4) of 1, unless when stated in contrary. This was weighted such that the learning objective (L) consisted of the cross-entropy loss (LCE), the gate loss (Lg), and twice the KD loss (2LKD): L = LCE + Lg + 2LKD. We reported the normalized Word Error Rate (WER) using the Whisper normalization method, with a slight modification to prevent the splitting of numbers and Latin-scripted text into individual characters in languages that do not employ space delimitation (e.g., Thai). Further details, including the modified normalization method, implementation scripts, and model weights, will soon be made available on GitHub and HuggingFace. Throughout our experiments, we selected the best-performing model based on its WER performance on the downsampled CV-13 validation set. 5.2 DistilWhisper versus other adaptation approaches Table 6 presents the results for our first experiment. The top portion presents whisper-large-v2 (upper bound) and whisper-small (lower bound) pre-trained scores, which should not be directly compared to the other adaptation techniques (middle and bottom), as these models were not trained on CV-13 (full out-of-domain setting). The middle portion presents standard fine-tuning (FT) and LoRA adaptation at the feed-forward layers (LoRA-FT). Our results are presented in the bottom: CLSR-FT corresponds to the setting without LKD, while DistilWhisper is the complete setting in which both CLSR and KD losses are leveraged. For whisper-small, we observe that both the standard fine-tuning method (FT) and the LoRA Adapters (LoRA-FT) approaches (middle portion of Table 6) demonstrate the capacity to enhance performance on the in-domain test set (CV-13). However, as anticipated, employing FT leads to a decline in performance on the out-of-domain test set, with an average increase of 1.6. This is likely attributed to catastrophic forgetting, resulting in a tendency to overly specialize in the specific domain. In contrast, LoRAFT represents a more lightweight adaptation technique that preserves the pre-trained representation. Remarkably, it exhibits improvements in performance on both the indomain (average decrease of 12.8) and out-of-domain (average decrease of 5.6) test sets when compared to whisper-small. 
Notably, experimenting with a larger hidden \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 29 Common voice 13.0 (in-domain for FT only) Model # params avg ca th ta hu cs pl gl uk whisper large-v2 1.5 B 14.9 16.9 9.3 17.3 18.6 14.5 8.1 19.0 15.6 whisper-small 244 M 31.4 30.1 20.3 30.1 45.5 38.6 18.8 35.7 32.3 +FT 244 M 22.0 19.0 10.9 17.3 30.4 29.2 21.4 19.3 28.8 +LoRA-FT (r=32) 256 M 18.6 15.7 9.2 15.3 30.5 25.0 15.4 12.8 24.8 +LoRA-FT (r=64) 267 M 18.6 15.5 9.2 15.5 30.6 25.2 15.4 13.0 24.6 +CLSR-FT 269 M 16.4 13.9 7.4 13.6 24.9 20.9 16.0 11.2 23.5 DistilWhisper 269 M 16.1 13.8 7.2 12.5 24.1 19.9 16.1 11.6 23.2 FLEURS (out-of-domain) Model # params avg ca th ta hu cs pl gl uk whisper large-v2 1.5 B 12.6 5.6 12.6 19.3 17.9 14.4 5.9 16.8 8.3 whisper-small 244 M 29.2 14.6 22.7 36.2 42.9 40.3 18.2 33.5 24.8 +FT 244 M 30.8 19.1 28.2 31.6 51.3 38.9 26.1 23.2 27.9 +LoRA-FT (r=32) 256 M 23.6 15.5 17.6 25.5 38.5 33.4 18.5 17.7 22.3 +LoRA-FT (r=64) 267 M 23.6 15.7 17.6 25.7 38.2 33.9 18.5 17.3 22.1 +CLSR-FT 269 M 23.6 15.5 15.7 23.2 37.6 31.2 22.9 16.9 25.9 DistilWhisper 269 M 22.8 15.4 15.1 21.6 37.2 29.8 21.4 16.7 25.1 Table 6 WER (\u2193) for the 10k setting with dataset averages (avg) for baselines (top), adaptation approaches (middle), and our method (bottom) for in-domain (CV-13, FT only) and out-of-domain (FLEURS, all) test sets.Best results for whisper-small in bold. dimension (r) for the LoRA adapters did not yield any perceptible improvement on the average. Our approach, DistilWhisper, yields notable enhancements in performance. When compared to whisper-small, it achieves a substantial improvement on in-domain data, with an average decrease of 15.3. This improvement is also evident when compared to LoRA-FT, where an average decrease of 2.2 is observed. Additionally, DistilWhisper exhibits superior adaptability in out-of-domain scenarios when contrasted with the original whisper-small, resulting in an average increase of 6.4. Furthermore, it demonstrates more effective out-of-domain adaptation capabilities in comparison to LoRA-FT, with an average increase of 0.8. We observe that both versions of our approach, with and without KD, successfully outperform all other adaptation approaches (FT, LoRAFT) for in-domain and out-of-domain in all languages but two (pl and uk) (bottom portion of Table 6). These findings highlights the robustness of our approach, showcasing that the proposed architecture with the addition of CLSR layers on Whisper provides a strong solution. Notably, all of these improvements are achieved with a mere 25 million parameter overhead during inference (10 % of the original model size). \f30 EXPERIMENTS AND RESULTS ON DISTILWHISPER 5.3 Impact of knowledge distillation In this analysis, we compare the two versions of our approach: one entails optimizing a lightweight CLSR-based architecture without Knowledge Distillation (CLSR-FT), while the other incorporates Knowledge Distillation loss (DistilWhisper). Across the examined languages, we observe some interesting trends. Firstly, when considering in-domain performance, as shown in Table 6, the DistilWhisper model exhibits a slightly increase in average performance of 0.3 on the WER. The performance is superior in all languages but Polish and Galician. However, when it comes to out-of-domain scenarios, DistilWhisper consistently outperforms CLSRFT across all languages, resulting in an average improvement of 0.8 on the WER. 
This observation confirms our initial hypothesis that the inclusion of Knowledge Distillation leverages the robustness imparted by the teacher model, preventing overspecialization in the CV-13 domain. Collectively, these results underscore the effectiveness of our proposed architecture. Notably, we managed to bridge the out-of-domain performance gap between large-v2 and small by a substantial 39%, reducing it from 16.6 to 10.2 (average decrease of 6.5). All of this was achieved with only a modest 10% parameter overhead during inference (25 million parameters). 5.4 DistilWhisper Scalability In the previous sections we showed that our architecture improves scores for both indomain and out-of-domain datasets, compared to other adaptation approaches. In this section we investigate the effectiveness of our method with respect to the amount of data available for training. For this, we select a subset of languages for which we find more training data available on CV-13 (ca, th, ta). Table 7 presents results for our approach in lower-resource training settings (3k utterances; approx. 4 hours), and higher-resource settings (28k utterances; approx. 40 hours). 10k results as well as the results for whisper-large-v2 and whisper-small are repeated from Table 6. We observe that, as expected, increasing the amount of trainable examples leads to superior ASR performance for both approaches, with the leveraging of KD (DistilWhisper) being consistently superior to CLSR-FT and getting closer to close the out-of domain performance gap. For the 28k setup, we are able to reduce the out-of-domain WER gap between whisper-large-v2 and whisper-small by 75.8%, from 12.0 to 2.9. \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 31 FLEURS CV-13 Train FLEURS CV-13 (out-of-domain) (in-domain) size avg avg ca ta th ca ta th whisper large-v2 12.5 14.5 5.6 19.3 12.6 16.9 17.3 9.3 whisper-small 24.5 26.8 14.6 36.2 22.7 30.1 30.1 20.3 +LoRA-FT (r=64) 3k 22.7 17.0 17.7 28.6 21.7 19.4 19.0 12.5 +CLSR-FT 3k 20.4 15.2 17.8 25.4 17.9 19.2 16.7 9.7 DistilWhisper 3k 20.2 14.8 17.2 25.7 17.6 18.9 15.9 9.6 +LoRA-FT (r=64) 10k 19.7 13.4 15.7 25.7 17.6 15.5 15.5 9.2 +CLSR-FT 10k 18.1 11.6 15.5 23.2 15.7 13.9 13.6 7.4 DistilWhisper 10k 17.4 11.2 15.4 21.6 15.1 13.8 12.5 7.2 +LoRA-FT (r=64) 28k 17.2 11.1 13.6 23.0 15.1 12.5 13.5 7.3 +CLSR-FT 28k 15.6 9.7 13.5 19.6 13.8 11.5 11.3 6.2 DistilWhisper 28k 15.4 9.3 13.1 19.2 14.0 11.3 10.9 5.7 Table 7 WER (\u2193) for different training data sizes (3k, 10k, and 28k utterances) for both in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results in bold. Furthermore, our approach demonstrates commendable robustness in relation to the quantity of trainable examples. Even with as few as 3,000 utterances (equivalent to 4 hours of training data), we are able to reduce the WER performance gap by 35.8% in out-of-domain data. This suggests that our method holds promise in enhancing ASR performance for low-resource languages, where training data availability is limited. Across all three settings, our approaches consistently outperform LoRA Adapters by a significant margin. Additionally, it is worth noting that, in nearly all cases within these settings, the inclusion of knowledge distillation proved more beneficial than fine-tuning alone, reinforcing the findings discussed in Section 5.3. 5.5 Gate Activation Analysis To better understand how the model uses routing mechanism, we analyze gate activation statistics on the experiment discussed on Section 5.4 for both CLSR-FT and DistilWhisper. 
These results are presented in Figure 14. Firstly, we observe a tendency for the models to rely more heavily on the newly introduced language-specific modules in out-of-domain scenarios. This could be attributed to the greater complexity and larger sentence sizes prevalent in the FLEURS dataset.

Figure 14: Ratio of LS layers chosen (LS activation, %) by the models (CLSR-FT and DistilWhisper), depending on (1) the amount of training data (3k, 10k, 28k); (2) in-domain (CV-13) or out-of-domain (FLEURS) evaluation; (3) the language (Catalan, Thai, Tamil).

Also, as expected, enlarging the training dataset consistently results in more reliable language-specific modules, leading to increased utilization of these modules. The only exception is Thai in the 28k setup with CLSR-FT, which might be due to dataset quality and requires further investigation. The comparison of the three languages reveals that Catalan displays a notably higher reliance on language-specific routes. This characteristic might be linked to the superior data quality available for Catalan in CV-13, where a substantial number of contributors have contributed to the dataset. The distilled version also uses more LS modules, probably because the teacher, whisper-large-v2, is a very good model for this language. For languages with a weaker teacher (Thai, Tamil), we observe that the model may receive contradictory signals in the lower-resource settings (3k, 10k), leading to less language-specific routing when knowledge distillation is used. However, in the higher-resource setting (28k), KD systematically leads to more reliable language-specific modules and therefore higher LS routing. Finally, we observe a common trend across the three languages: models tend to employ more language-specific routes when learning with knowledge distillation (DistilWhisper vs. CLSR-FT). This suggests that KD imparts valuable information and enhances the out-of-domain generalization capabilities of the learned language-specific representation.
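The statistics reported in Figure 14 amount to counting, per CLSR layer, how often the hard gate selects the language-specific path. A minimal way to compute such a ratio from collected gate values is sketched below; how the gate logits are hooked and gathered during decoding is left out and would depend on the implementation.

def ls_activation_ratio(gate_logits):
    """Fraction (in %) of tokens routed through the language-specific path.

    `gate_logits` is assumed to be a list of tensors, one per CLSR layer,
    holding the pre-sigmoid gate values G(z) collected during a forward pass;
    with hard gating, a token uses the LS expert when G(z) >= 0.
    """
    total, on = 0, 0
    for logits in gate_logits:
        decisions = (logits >= 0)
        on += decisions.sum().item()
        total += decisions.numel()
    return 100.0 * on / max(total, 1)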
\fEXPERIMENTS AND RESULTS ON DISTILWHISPER 33 Common voice 13.0 (in-domain for FT only) Model avg bg ca cs fi gl hi hu id pl ro sk sl ta th uk whisper large-v2 17.0 19.9 16.9 14.5 14.4 19.0 24.6 18.6 8.5 8.1 15.8 31.9 20.6 17.3 9.3 15.6 whisper-small 34.2 44.8 30.1 38.6 30.5 35.7 43.6 45.5 22.5 18.8 33.2 42.0 45.5 30.1 20.3 32.3 +CLSR-FT 22.9 26.1 19.2 25.7 25.1 15.3 18.8 31.6 19.2 18.3 23.4 36.6 28.6 16.7 9.7 29.5 DistilWhisper 22.6 25.9 18.9 26.2 24.8 14.7 18.3 31.0 18.6 18.6 21.5 36.8 27.7 15.9 9.6 30.0 FLEURS (out-of-domain) Model avg bg ca cs fi gl hi hu id pl ro sk sl ta th uk whisper large-v2 13.7 14.6 5.6 14.4 9.7 16.8 23.8 17.9 7.1 5.9 14.4 11.7 23.1 19.3 12.6 8.3 whisper-small 32.8 39.9 14.6 40.3 26.8 33.5 47.9 42.9 18.6 18.2 34.6 35.8 54.5 36.2 22.7 24.8 +CLSR-FT 29.2 43.8 17.8 35.4 33.7 19.8 22.8 40.1 19.0 21.9 33.4 35.3 50.8 25.4 17.9 21.6 DistilWhisper 29.2 42.8 17.2 35.6 32.0 18.7 21.8 41.1 19.1 21.9 33.1 35.2 50.5 25.7 17.6 25.9 FLEURS (out-of-domain) CV-13 (in-domain) High Mid-to-high Low-to-mid Low Extremely Low High Mid-to-high Low-to-mid Low Extremely Low whisper large-v2 7.1 8.3 15.7 18.3 16.8 12.0 15.6 15.1 24.3 19.0 whisper-small 19.6 24.8 35.3 44.5 33.5 25.5 32.3 33.5 44.0 35.7 +CLSR-FT 23.1 21.6 30.4 38.2 19.8 20.4 29.5 21.4 27.5 15.3 DistilWhisper 22.5 25.9 30.6 37.6 18.7 20.2 30.0 20.8 27.2 14.7 Table 8 WER (\u2193) for the 3k setting with dataset averages (avg) for baselines (top), and our method (bottom) for in-domain (CV-13, FT only higher portion) and out-of-domain (FLEURS, all middle portion) test sets. On the lower portion, the same results are grouped by resourcefulness. Best results for whisper-small in bold. \f34 EXPERIMENTS AND RESULTS ON DISTILWHISPER 5.6 Considerations on the Resourcefulness Our observations so far indicate that both versions of our approach, with and without knowledge distillation (KD), demonstrate consistent outperformance over all other adaptation methods (FT and LoRA-FT). This improvement holds true for both in-domain and out-of-domain scenarios across all languages, with only two exceptions on the 10k setting (Polish and Ukrainian), as indicated in the lower portion of Table 6. The challenges encountered in these two languages can be attributed to their higher resource status, with Polish being a high-resource language and Ukrainian categorized as midto-high resource, as detailed in Table 4. In order to deepen this analysis, we conducted experiments across a broader range of languages, widening to those with a minimum of 3,000 utterances available for training. The outcomes of these experiments are presented in Table 8, where we have also aggregated the results into resourcefulness clusters (in the lower portion) based on the classification provided in Table 4. Examining the results, we observed that more substantial out-of-domain improvements are seem in languages with lower resource availability (Low-to-mid, Low and Extremely low-resource clusters). This aligns with the initial motivation behind our work, which aimed to address the curse of multilinguality. We expect that lower resource languages experience a more significant impact from this phenomenon during the pre-training of whisper-small. Consequently, they significantly benefit more from the integration of language-specific modules in the feature domain. In contrast, for languages with higher resource availability, further enhancements may be necessary, such as adjustments to attention weights (corresponding to the time domain). 
This is due to the fact that the original model already performs reasonably well. Additionally, achieving better out-of-domain performance may require a larger training dataset. This is exemplified by the case of Catalan presented in Table 7. In this case, CLSR modules yielded superior performance than original whisper-small only in the case trained with 28,000 utterances, losing to its starting point for 3,000 and 10,000 training utterances. 5.7 Effect of temperature and distillation loss In this set of experiments, our goal is to examine the impact of the chosen distillation optimization on the results. We start by exploring the effect of temperature. Temperature plays a crucial role in determining the learning behavior of the model. A lower temperature, like 1, tends to make the learning focus primarily on replicating the first \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 35 option from the teacher\u2019s logits for each token. Conversely, a higher temperature, such as 3 or 4, encourages the learning to take into account the other options, thereby mitigating the cost from incorrect predictions. However, this approach may lead to over-smoothing of the distribution and a reduced ability to effectively rank similar logits. Common voice 13.0 (in-domain) avg ca th ta hu cs pl gl uk JS w/ \u03c4 = 1 16.1 13.8 7.2 12.5 24.1 19.9 16.1 11.6 23.2 JS w/ \u03c4 = 3 16.3 14.1 7.5 13.1 23.5 21.1 16.2 11.6 23.6 FLEURS (out-of-domain) avg ca th ta hu cs pl gl uk JS w/ \u03c4 = 1 22.8 15.4 15.1 21.6 37.2 29.8 21.4 16.7 25.1 JS w/ \u03c4 = 3 23.4 17.0 15.6 21.5 36.0 31.4 22.4 16.8 26.2 Table 9 WER (\u2193) for the 10k setting with dataset averages (avg) for JS loss with temperatures 1 and 3, for in-domain (CV-13, FT only higher portion) and out-ofdomain (FLEURS, all lower portion) test sets. Best results in bold. Tables 9 and 10 present the results of comparing different temperatures (1 or 3) with the Jensen\u2013Shannon loss for both the 10k and 28k settings. These results reveal that using a temperature of 1 generally results in improved in-domain and out-of-domain performance compared to a temperature of 3. However, for Tamil and Hungarian, temperature 3 showed better out-of-domain performance. These results suggest that whisper-large-v2 serves as an effective teacher, justifying the use of a temperature of 1. Nevertheless, the optimal temperature value may vary depending on the quality of the teacher model for each specific language. FLEURS CV-13 FLEURS (out-of-domain) CV-13 (in-domain) avg avg ca ta th ca ta th JS w/ \u03c4 = 1 15.4 9.3 13.1 19.2 14.0 11.3 10.9 5.7 JS w/ \u03c4 = 3 16.3 9.7 14.8 20.1 14.1 11.8 11.3 5.9 KL w/ \u03c4 = 1 15.6 10.8 14.6 18.7 13.3 14.9 11.3 6.2 KL w/ \u03c4 = 3 16.5 9.7 15.8 19.8 14.0 12.2 11.1 5.9 Table 10 WER (\u2193) for different training data sizes (3k, 10k, and 28k utterances) for JS and KL losses for temperatures 1 and 3 for both in-domain (CV-13) and out-ofdomain (FLEURS) test sets. Best results in bold. \f36 EXPERIMENTS AND RESULTS ON DISTILWHISPER Table 10 also compares the use of the Jensen\u2013Shannon (JS) loss with the traditional Kullback\u2013Leibler (KL) loss discussed in Section 2.4, specifically for the 28k setting. Once again, the results favor a temperature of 1 in both cases, with a slight advantage for the JS loss against KL, primarily driven by Catalan out-of-domain performance. This advantage is more pronounced in in-domain performance. 
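To make the role of the temperature concrete, the sketch below shows a token-level, full-distribution variant of the temperature-scaled Jensen-Shannon loss: both teacher and student logits are divided by the temperature before the softmax, so a larger value flattens the distributions being matched. Eq. 4.7 is stated over sampled sequences, so this is an illustrative simplification under stated assumptions, not the exact training objective.

import torch
import torch.nn.functional as F

def js_distillation_loss(student_logits, teacher_logits, temperature=1.0, eps=1e-8):
    """Token-level Jensen-Shannon distillation loss with temperature scaling."""
    p = F.softmax(teacher_logits / temperature, dim=-1)  # teacher distribution
    q = F.softmax(student_logits / temperature, dim=-1)  # student distribution
    m = 0.5 * (p + q)                                    # mixture distribution
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    kl_qm = (q * (torch.log(q + eps) - torch.log(m + eps))).sum(-1)
    return 0.5 * (kl_pm + kl_qm).mean()

In the comparisons reported in Tables 9 and 10, it is exactly this temperature, together with the choice of JS versus KL, that is being varied.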
These findings indicate the presence of the mode-averaging problem introduced in Section 2.4, although they are not definitive. They raise questions about whether these behaviors change when working with larger or smaller fine-tuning datasets and different levels of language resourcefulness. Unfortunately, due to time constraints, we could not explore these aspects in this study, leaving them as potential directions for future research. 5.8 Multi-domain training In our final experiment, we delve into the impact of incorporating the train split of FLEURS dataset into our training data in the previously explored settings. The objective here is to use the validated architecture to generate models that would be more beneficial to the scientific community. In real-world scenarios, the models developed here are likely to be utilized in domains other than FLEURS or CV-13, so the hypothesis is that training on more than one dataset yields a better model. Common voice 13.0 Model Train data avg ca th ta hu cs pl gl uk whisper large-v2 14.9 16.9 9.3 17.3 18.6 14.5 8.1 19.0 15.6 whisper-small 31.4 30.1 20.3 30.1 45.5 38.6 18.8 35.7 32.3 DistilWhisper CV10k 16.1 13.8 7.2 12.5 24.1 19.9 16.1 11.6 23.2 +CLSR-FT CV10k + F 15.5 15.1 6.8 12.4 21.9 18.4 16.3 11.3 22.2 DistilWhisper CV10k + F 14.6 13.2 6.4 11.6 21.6 15.3 15.8 11.2 21.6 FLEURS Model Train data avg ca th ta hu cs pl gl uk whisper large-v2 12.6 5.6 12.6 19.3 17.9 14.4 5.9 16.8 8.3 whisper-small 29.2 14.6 22.7 36.2 42.9 40.3 18.2 33.5 24.8 DistilWhisper CV10k 22.8 15.4 15.1 21.6 37.2 29.8 21.4 16.7 25.1 +CLSR-FT CV10k + F 17.2 11.8 10.1 16.0 28.1 23.2 17.1 12.9 18.7 DistilWhisper CV10k + F 16.7 11.9 9.4 14.6 27.7 22.1 17.7 12.7 17.3 Table 11 WER (\u2193) for the setting trained with 10k from CV-13 and FLEURS with dataset averages (avg) for baselines (top), adaptation approaches (middle), and our method (bottom) CV-13 and FLEURS test sets (both in-domain). Best results for whisper-small in bold. Table 11 showcases the outcomes of training the model in a setting involving 10k sentences from CV-13 along with the entire FLEURS train split. In this setting, we once \fEXPERIMENTS AND RESULTS ON DISTILWHISPER 37 again experiment with CLSR fine-tuning. For reference, the table also presents results from section 5.2. The results reaffirm better performance for the setting with Knowledge Distillation compared to CLSR-FT. More significantly, the results demonstrate a substantial improvement within the domain when FLEURS is incorporated as part of the training dataset. Training with FLEURS reduces the WER on CV-13 by 1.5. This improvement is likely due to FLEURS\u2019 greater sentence complexity and larger average token count per line, contributing to enhanced training data diversity. In table 12, we repeat the same experiment using settings with 3k and 28k sentences from CV-13, both added to the full FLEURS dataset. The results allow us to draw the same conclusions: the addition of out-of-domain training data (FLEURS) results in superior in-domain generalization on CV-13. Nevertheless, it is evident that the size of the training data remains a limiting factor, as CV3k+F (approximately 6k sentences) was insufficient to surpass CV10k alone, and similarly for CV10k+F (around 13k sentences) in comparison to CV28k alone. In this section, we have presented the best models attainable for each setting using these two datasets. These models will be made open-source, and we hope they contribute to the development of speech recognition applications in these languages. 
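In practice, the multi-domain setting above amounts to concatenating the down-sampled CV-13 training split with the FLEURS training split before fine-tuning. A sketch with the Hugging Face datasets library is shown below; the Hub identifiers, configuration names and column names are given to the best of our knowledge, and the up-vote-based selection of the 10k CV-13 utterances is only indicated by a plain select.

from datasets import Audio, concatenate_datasets, load_dataset

cv = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="train")
fleurs = load_dataset("google/fleurs", "ca_es", split="train")

cv = cv.select(range(10_000))  # stand-in for the up-vote-based down-sampling
cv = cv.rename_column("sentence", "text")
fleurs = fleurs.rename_column("transcription", "text")

# keep only what the ASR fine-tuning needs and align the sampling rate
keep = ["audio", "text"]
cv = cv.select_columns(keep).cast_column("audio", Audio(sampling_rate=16_000))
fleurs = fleurs.select_columns(keep).cast_column("audio", Audio(sampling_rate=16_000))

mixed_train = concatenate_datasets([cv, fleurs]).shuffle(seed=42)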
\f38 EXPERIMENTS AND RESULTS ON DISTILWHISPER Common voice 13.0 Train data avg bg ca cs fi gl hi hu id pl ro sk sl ta th uk whisper large-v2 17.0 19.9 16.9 14.5 14.4 19.0 24.6 18.6 8.5 8.1 15.8 31.9 20.6 17.3 9.3 15.6 whisper-small 34.2 44.8 30.1 38.6 30.5 35.7 43.6 45.5 22.5 18.8 33.2 42.0 45.5 30.1 20.3 32.3 DistilWhisper CV3k 22.6 25.9 18.9 26.2 24.8 14.7 18.3 31.0 18.6 18.6 21.5 36.8 27.7 15.9 9.6 30.0 DistilWhisper CV3k + F 19.3 21.8 15.0 21.7 22.4 14.2 15.8 26.4 17.0 17.2 18.0 29.3 22.9 13.4 7.8 27.0 FLEURS Train data avg bg ca cs fi gl hi hu id pl ro sk sl ta th uk whisper large-v2 13.7 14.6 5.6 14.4 9.7 16.8 23.8 17.9 7.1 5.9 14.4 11.7 23.1 19.3 12.6 8.3 whisper-small 32.8 39.9 14.6 40.3 26.8 33.5 47.9 42.9 18.6 18.2 34.6 35.8 54.5 36.2 22.7 24.8 DistilWhisper CV3k 29.2 42.8 17.2 35.6 32.0 18.7 21.8 41.1 19.1 21.9 33.1 35.2 50.5 25.7 17.6 25.9 DistilWhisper CV3k + F 18.3 21.0 12.3 24.2 19.7 13.9 13.5 29.0 13.0 16.6 21.4 19.4 27.5 15.1 10.3 18.1 Training FLEURS CV-13 FLEURS CV-13 Data avg avg ca ta th ca ta th whisper large-v2 12.5 14.5 5.6 19.3 12.6 16.9 17.3 9.3 whisper-small 24.5 26.8 14.6 36.2 22.7 30.1 30.1 20.3 DistilWhisper CV28k 15.4 9.3 13.1 19.2 14.0 11.3 10.9 5.7 DistilWhisper CV28k + F 11.4 9.0 10.8 14.2 9.4 10.9 10.5 5.6 Table 12 WER (\u2193) for the 3k setting with dataset averages (avg) for baselines (top), and our method (bottom) for in-domain (CV-13, FT only higher portion) and out-of-domain (FLEURS, all middle portion) test sets. On the lower portion, the same results are grouped by resourcefulness. Best results for whisper-small in bold. \fCONCLUSION 39 6 Conclusion This internship focused on investigating bias on Whisper, a family of large speech models, specifically examining speaker-related (gender, age, accent) and model-related (model size, resourcefulness, similar languages) biases. Additionally, we explored whether these biases are mitigated or exacerbated by quantization and proposed an alternative compression approach. Our findings revealed that Whisper exhibits both speaker-related and model-related biases. Speaker-related biases are kept unchanged after quantization, while modelrelated biases are amplified by this compression technique. Low-resource languages are particularly more affected, and smaller models experience significant performance degradation. This is concerning because current parameter-efficient approaches typically apply quantization uniformly across models, introducing unintended bias. To address this challenge, we introduced DistilWhisper, a parameter-efficient distillation approach that enhances the performance of whisper-small by transferring the robustness of whisper-large-v2 into a smaller model. This is achieved by incorporating language-specific gated modules and jointly optimizing ASR fine-tuning and knowledge distillation losses. Our results consistently showed performance improvements across various languages and test sets, with minimal parameter increase during inference. We believe this approach will democratize the use of Whisper models, making them accessible to a wider audience of researchers and practitioners. This approach was organized as a paper submitted to the conference ICASSP 2024 (Ferraz et al., 2024). Code and models produced in this study will be made available soon on Hugging Face and Github. 6.1 Future Work There are several promising directions for future research in this area. 
Firstly, it would be beneficial to expand upon the analysis presented in Chapter 3, including an investigation into other quantization methods, such as 4-bit quantization. Exploring these methods across various model families would help determine if the conclusions drawn here are applicable more broadly. This could present an important contribution to the community and ensure the correct usage of these techniques. Additionally, further research into the DistilWhisper approach could yield valuable insights. Examining the effects of several hyperparameters, such as gate budget, KD loss weight, and temperature, would provide a deeper understanding of the approach\u2019s \f40" + }, + { + "url": "http://arxiv.org/abs/2311.01070v3", + "title": "Multilingual DistilWhisper: Efficient Distillation of Multi-task Speech Models via Language-Specific Experts", + "abstract": "Whisper is a multitask and multilingual speech model covering 99 languages.\nIt yields commendable automatic speech recognition (ASR) results in a subset of\nits covered languages, but the model still underperforms on a non-negligible\nnumber of under-represented languages, a problem exacerbated in smaller model\nversions. In this work, we propose DistilWhisper, an approach able to bridge\nthe performance gap in ASR for these languages while retaining the advantages\nof multitask and multilingual capabilities. Our approach involves two key\nstrategies: lightweight modular ASR fine-tuning of whisper-small using\nlanguage-specific experts, and knowledge distillation from whisper-large-v2.\nThis dual approach allows us to effectively boost ASR performance while keeping\nthe robustness inherited from the multitask and multilingual pre-training.\nResults demonstrate that our approach is more effective than standard\nfine-tuning or LoRA adapters, boosting performance in the targeted languages\nfor both in- and out-of-domain test sets, while introducing only a negligible\nparameter overhead at inference.", + "authors": "Thomas Palmeira Ferraz, Marcely Zanon Boito, Caroline Brun, Vassilina Nikoulina", + "published": "2023-11-02", + "updated": "2024-03-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "INTRODUCTION Whisper [1] is a popular multilingual and multitask speech model that is known for its robustness (i.e. invariant performance over different out-of-domain data) for automatic speech recognition (ASR) [2]. This model covers 99 languages, and jointly trains on ASR, speech translation (manyto-English), language identification, and voice activity detection tasks. The original paper points this multitask training as a reason for the observed robustness of the model to out-of-domain data: compared to the English wav2vec 2.0 model [3], Whisper performance seems to generalize better to unseen domains. Presented in many sizes (from tiny to large-v2), we note that there is an important gap in ASR performance between whisper-large-v2 (largest model) and whisper-small (second smallest model) on a large set of languages, including low-resource languages, but also many highand mid-resource ones. Such phenomenon in NLP is often referred as curse of multilinguality [4, 5, 6], where the performance drop due to the growing amount of covered languages can only be recovered via extensive model scaling. Such scaling comes with an important inference cost increase: for instance, whisper-large-v2 is 2-3 times slower than whisper-small. 
A common approach to efficient inference is distilling knowledge from a large multilingual teacher model into a smaller model [7, 8]. However, to apply such knowledge distillation (KD) to whisper-large-v2, the best and largest Whisper model, we would need to access unavailable information such as the training data across all the tasks and languages, in order to preserve the robustness of the original model. Recent works [9, 10] have demonstrated that the curse of multilinguality can also be solved by equipping a moderately sized model with language-specific (LS) modules. Such architecture allows to extend model parameters via extra modules when more languages are added into the model, thus maintaining consistent performance across languages, with no (or very low) extra computations at inference. Inspired by those findings we propose DistilWhisper, which extends whisper-small with LS feed-forward layers, that are used in parallel with the original feed-forward layers of the model. In order to preserve the robustness of the original model, DistilWhisper introduces the following extensions of previous works: (1) Following [11] we extend conditional language-specific routing (CLSR) modules with the gating mechanism that can route input representation either through the original feed-forward layer or through newly learned LS feed-forward layer; (2) When learning LS layers, we use whisper-large-v2 as a teacher model with the hypothesis that the KD loss should help reproducing the robustness of the larger Whisper model. Through extensive experiments on a diverse set of languages we demonstrate the effectiveness of DistilWhisper compared to standard fine-tuning or LoRA [12] adapters. Our lightweight ASR fine-tuning approach based on CLSR modules generalizes better than LoRA, and the introduction of KD further boosts results in both inand out-of-domain test sets. We perform additional ablation studies showing our arXiv:2311.01070v3 [cs.CL] 12 Mar 2024 \fapproach can cope with different amounts of training data. Finally, we demonstrate that the flexibility introduced by the gating mechanism equips DistilWhisper with an efficient adaptation approach, leveraging the LS modules only when those are relevant. We make available the models\u2019 weights1 and code2 developed in this work. 2. BACKGROUND State of the art for ASR: Current approaches for ASR mainly rely on the adaptation of pre-trained Transformer stacks learned through self-supervision (i.e. SSL models) on unlabeled audio data. Such pre-trained models vary on the usage of pretext tasks [3, 13, 14] and language coverage [15, 16, 10, 17]. In contrast to this branch of research, the Whisper model relies on weak supervision, which means that the architecture is trained on weakly labeled data only (no self-supervision). Nonetheless, they show that with sufficient amounts of data, the model reaches competitive results compared to mono [1, 2] and multilingual SSL models [10]. Knowledge distillation (KD) has been initially proposed by [18] to distill knowledge from ensemble of models into a single model for ASR. It has further been used to distill knowledge from a large teacher model into smaller student models [7, 8, 19]. While original KD methods relied on minimization of KL-divergence between a teacher model and a student model, recent research [20, 21] have shown that symmetric divergences, such as Jensen-Shannon (JS) divergence, suffer less from borderline behaviors and lead to better results on sequence level distillation. 
Adapters are small lightweight modules which are commonly used to adapt pre-trained models to new tasks or domains. In speech-related tasks, adapter-based fine-tuning has been utilized for speech translation [22, 23, 24], and domain adaptation [25, 26], for which they exhibit a similar performance to standard fine-tuning, but with only a fraction of trainable parameters. We also find work on task-adaptation of Whisper [27, 28, 29] using LoRA adapters. In contrast to adapters, in this work we introduce gated LS layers into Whisper, and propose a parameter-efficient KD approach that allows us to increase robustness to out-of-domain data. 3. DISTILWHISPER With the goal of increasing performance for different languages in models of limited capacity, we propose the DistilWhisper approach: we plug conditional language-specific routing (CLSR) modules [11] into a small Whisper (whispersmall), and optimize these modules jointly on ASR fine1Weights available at: https:// huggingface.co/collections/naver/ multilingual-distilwhisper-6576ecae8d209fc6a767d9e7. 2Code available at: https://github.com/naver/ multilingual-distilwhisper tuning and KD from a larger Whisper (whisper-largev2). Figure 1 presents our architecture, below we detail its key components. CLSR module. We extend CLSR modules for the first time to the speech domain. This module learns a hard binary gate g(\u00b7) for each input token by using its hidden embedding zl. These decisions enable a layer to selectively guide information through either a LS path denoted as hlang or a shared path referred to as hshared, as in Eq 1. In contrast to the original CLSR, in this work we use LS gates as shown in Figure 1, instead of sharing them across languages. This allows us to train LS components individually (i.e. in parallel), and then only load the relevant modules at inference. Moreover, our approach also differs from the original CLSR by the positioning: supported by previous work [11, 9], we limit CLSR to the feed-forward, which we also replace entirely by the CLSR module, reducing further the number of parameters. Gating follows [11]: each gate g(.) is made by a two-layer bottleneck network, which is summed to an increasing zeromean Gaussian noise during training in order to discretize it. At inference time, we adopt hard gating. DistilWhisper approach is detailed at Figure 1. Our student is enriched with CLSR modules at each feed-forward for each language. These CLSR layers are initialized from the frozen weights of the corresponding feed-forward layer. At training time, for each language the model updates only the corresponding LS layers and gates. At inference time, the model loads the shared layers (multilingual) and the LS modules and gates for the languages of interest, resulting in a limited parameter overhead. We highlight that the use of CLSR modules brings more flexibility to our architecture when compared to adapters, as it allows for routing at the token-level. This makes this approach more capable of leveraging pre-existing knowledge (shared frozen module) via LS gating activation. DistilWhisper optimization. Following [11], when learning CLSR module parameters, in addition to standard crossentropy loss LCE, we employ a gate budget loss Lg (Eq 2) to balance models\u2019 usage of LS and language-shared modules. It relies on the gate g(.) 
activation values for a pair (audio, text) (X, Y ) in a batch B, which is expressed by G(X,Y ) = \u2211x\u2208X \u2211m\u2208Menc gm(x)+\u2211y\u2208Y \u2211m\u2208Mdec gm(y) where Menc and Mdec are respectively the encoders and decoders layers, and gm(.) = 1 when LS layer is selected, or 0 otherwise. The average of this gate usage is constrained to a budget b (Eq 2). For KD, following recent research [20, 21], we use JS divergence, whose loss is detailed in Eq 3, where p is the teacher distribution, q\u03b8 is the student distribution, Y and Y\u2032 are sampled from the teacher\u2019s and student\u2019s distributions and compared with their average m(\u22c5) = 1 2p(\u22c5) + 1 2q\u03b8(\u22c5). Thus, CLSR modules parameters are learned to minimize final loss expressed as L = LCE + Lg + \u03b1LKD. CLSR(zl) = g(zl)\u22c5hlang(zl)+(1\u2212g(zl))\u22c5hshared(zl). (1) \fwhisper-large-v2 LKD LCE+\u00a0Lg whisper-small + CLSR Fine-tuning dataset LK Fine-tuned\u00a0 \u00a0 \u00a0 \u00a0 \u00a0Frozen\u00a0 Language-specific Layers Shared all ca cs uk ... g g g x12 Encoder CLSR Layer Self-Attention x12 Decoder CLSR Layer Cross-Attention Self-Attention Fig. 1. The DistilWhisper optimization approach (left), and its architecture (right). The feed-forward is replaced by a CLSR module, where the LS gates (g) learn to alternate between the pre-trained frozen multilingual representation and the LS layer. Lg = \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u2211(X,Y )\u2208B G(X,Y ) \u2211(X,Y )\u2208B(\u2223X\u2223\u2223Menc\u2223+ \u2223Y \u2223\u2223Mdec\u2223) \u2212b \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb \u00bb (2) LKD = 1 2EY\u223cp [log p(Y) m(Y)] + 1 2EY\u2032\u223cq\u03b8 [log q\u03b8(Y\u2032) m(Y\u2032) ] (3) 4. EXPERIMENTAL SETUP Datasets: We downsample the train and validation sets of the CommonVoice 13.0 (CV-13) dataset [30], using equal amounts of training data for each selected language: 10k utterances for training (approx. 14 h), 1k for validation. Data selection depends on the amount of up-votes utterances received by annotators. We do not downsample the test set. The FLEURS [31] dataset is used for out-of-domain evaluation, as it provides both a good language overlap with CV-13, and an effective out-of-domain setting for ASR evaluation. For instance, average number of tokens per sample for CV-13 is 36, and 97 for FLEURS. Language Selection: We consider all Whisper languages with a WER gap of more than 11 between large and small models on CV-13. We then narrow this list considering: 1) minimum amount of utterances (10k); 2) overlap with the FLEURS dataset. The final list of languages is: Catalan (ca), Czech (cs), Galician (gl), Hungarian (hu), Polish (pl), Thai (th), Tamil (ta) and Ukranian (uk).3 These languages encompass 5 language sub-families and vary widely in terms of coverage in the Whisper training set, spanning from 4,300 h (pl) to just 9 h (gl). Models: We compare our approach to both whispersmall (pre-trained student) and whisper-large-v2 (teacher) models, as well as two approaches of fine-tuning (FT) for the student: standard fine-tuning (all weights are updated), and LoRA adaptation on top of the feed-forward layer. Finally, we also investigate the impact of the CLSR layer without the use of KD (CLSR-FT), decoupling the effect of KD from the flexibility offered by the routing mechanism on the consequent robustness of the model. 
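For the LoRA-FT baseline, a configuration along the lines sketched below could be used. The rank follows the r = 32 setting referenced earlier in this document, while the alpha and dropout values and the restriction to the feed-forward projections (named fc1 and fc2 in the Hugging Face Whisper implementation) are assumptions rather than the exact hyperparameters of [28].

from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration

student = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

lora_cfg = LoraConfig(
    r=32,                 # rank referenced earlier in this document
    lora_alpha=64,        # assumption, not taken from [28]
    lora_dropout=0.05,    # assumption
    target_modules=["fc1", "fc2"],  # Whisper feed-forward projections only
)
lora_student = get_peft_model(student, lora_cfg)
lora_student.print_trainable_parameters()  # only the adapter weights are trainable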
Implementation: We train all models using the Transformers library [32], and make use of the whisper-small and whisper-large-v2 pre-trained weights from HuggingFace (https://huggingface.co/openai/). All models are trained for 10 epochs using a learning rate of 10^-4 with linear decay, one epoch of warm-up, batch size 16, and a label smoothing factor of 0.1. For LoRA, we use the hyperparameters proposed by [28]. For CLSR training we set the gate budget b = 0.5 and the skip-gate probability s = 0.2. For KD we employ the JS divergence with temperature τ = 1, weighted such that the learning objective is L = L_CE + L_g + 2 L_KD. We report normalized WER using the Whisper normalization, with a slight modification to avoid splitting numbers and Latin-scripted text into individual characters for languages that do not use space delimitation (th). In all cases, the best model is chosen based on WER on the down-sampled CV-13 validation set.

5. RESULTS
We conduct training for each setting using three distinct seeds and present the average scores. Table 1 presents our results. The top portion presents the whisper-large-v2 (upper bound) and whisper-small (lower bound) pre-trained scores. The middle portion presents standard fine-tuning (FT) and LoRA adaptation at the feed-forward layers (LoRA-FT). Our results are presented at the bottom: CLSR-FT corresponds to the setting without L_KD, while DistilWhisper is the complete setting in which both the CLSR and KD losses are leveraged.

DistilWhisper versus other adaptation approaches. For whisper-small, we observe that both the FT and LoRA-FT approaches (middle portion of Table 1) are able to improve performance on both in- and out-of-domain test sets. However, for FT this boost in performance comes at the cost of language specialization. In contrast, LoRA-FT is a light adaptation technique that does not modify the pre-trained representation. This method increases performance on both in-domain (avg -13.1) and out-of-domain (avg -3.5) test sets compared to whisper-small. DistilWhisper further improves performance over whisper-small (avg -15.3) and LoRA-FT (avg -2.2) for in-domain data. It also presents better out-of-domain adaptation capabilities compared to LoRA-FT (avg -2.1).

Table 1. WER (↓) with dataset averages (avg) for baselines (top), adaptation approaches (middle), and our method (bottom) for in-domain (CV-13, FT only) and out-of-domain (FLEURS, all) test sets. Best results for whisper-small in bold.
Model | #params | FLEURS avg | CV-13 avg | FLEURS (out-of-domain): ca / cs / gl / hu / pl / ta / th / uk | CV-13 (in-domain for FT only): ca / cs / gl / hu / pl / ta / th / uk
whisper-large-v2 | 1.5B | 12.5 | 14.9 | 5.6 / 14.3 / 16.6 / 17.9 / 5.9 / 19.3 / 12.2 / 8.1 | 16.9 / 14.4 / 18.9 / 18.7 / 8.0 / 17.3 / 9.2 / 15.5
whisper-small | 244M | 28.3 | 31.4 | 14.6 / 40.4 / 32.7 / 43.0 / 16.7 / 36.0 / 22.8 / 20.5 | 30.1 / 38.4 / 35.5 / 45.6 / 18.6 / 30.0 / 20.3 / 32.3
whisper-small+FT | 244M | 23.3±0.06 | 16.3±0.09 | 15.5 / 31.0 / 16.9 / 36.7 / 22.0 / 22.7 / 15.6 / 25.9 | 13.7 / 20.5 / 11.3 / 24.1 / 16.3 / 13.6 / 7.4 / 23.4
whisper-small+LoRA-FT | 379M | 24.9±0.07 | 18.2±0.02 | 17.6 / 36.9 / 18.2 / 41.6 / 25.9 / 15.2 / 11.7 / 31.8 | 14.0 / 23.7 / 12.7 / 28.0 / 21.2 / 12.0 / 7.9 / 26.4
whisper-small+CLSR-FT | 369M | 23.4±0.19 | 16.3±0.08 | 15.7 / 30.5 / 17.2 / 36.9 / 22.8 / 22.7 / 15.6 / 25.8 | 14.1 / 20.3 / 11.6 / 24.3 / 16.1 / 13.3 / 7.4 / 23.4
DistilWhisper | 369M | 22.8±0.21 | 16.0±0.04 | 15.3 / 30.2 / 16.7 / 36.9 / 21.4 / 21.8 / 15.1 / 24.9 | 13.8 / 20.0 / 11.8 / 24.0 / 15.9 / 12.6 / 7.2 / 23.1
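The WER scores reported in this section are computed on normalized text. As a hedged illustration of this kind of evaluation — not the authors' exact normalizer, which additionally avoids splitting numbers and Latin-scripted text into characters for languages without space delimitation such as Thai — a minimal sketch using the openai-whisper BasicTextNormalizer and jiwer could look like this:

```python
import jiwer
from whisper.normalizers import BasicTextNormalizer  # from the openai-whisper package

normalizer = BasicTextNormalizer()

def normalized_wer(references, hypotheses):
    """Compute WER (%) after applying a Whisper-style text normalization."""
    refs = [normalizer(r) for r in references]
    hyps = [normalizer(h) for h in hypotheses]
    return 100.0 * jiwer.wer(refs, hyps)

# Usage on hypothetical strings:
print(normalized_wer(["És un exemple en català."], ["es un exemple en catala"]))
```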
Table 2. Average WER (↓) for different training data sizes (3k, 10k, and 28k utterances) for in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results in bold.
Model | Train size | FLEURS avg | CV-13 avg | FLEURS: ca / ta / th | CV-13: ca / ta / th
whisper-small+CLSR-FT | 3k | 20.5±0.17 | 15.0±0.07 | 17.9 / 25.6 / 18.0 | 19.0 / 16.4 / 9.8
DistilWhisper | 3k | 20.2±0.13 | 14.6±0.08 | 17.4 / 25.5 / 17.7 | 18.7 / 15.7 / 9.6
whisper-small+CLSR-FT | 10k | 18.0±0.25 | 11.6±0.01 | 15.7 / 22.7 / 15.6 | 14.1 / 13.3 / 7.4
DistilWhisper | 10k | 17.4±0.13 | 11.2±0.08 | 15.3 / 21.8 / 15.1 | 13.8 / 12.6 / 7.2
whisper-small+CLSR-FT | 28k | 15.7±0.15 | 9.5±0.13 | 13.5 / 19.8 / 13.9 | 11.3 / 11.3 / 6.0
DistilWhisper | 28k | 15.5±0.03 | 9.3±0.06 | 13.3 / 19.3 / 13.7 | 11.3 / 11.0 / 5.7

Impact of knowledge distillation. We observe that DistilWhisper on average outperforms all other adaptation approaches (FT, LoRA-FT) on both in- and out-of-domain test sets (bottom portion of Table 1). Comparing our models (CLSR-FT and DistilWhisper), we observe that the version with KD (DistilWhisper) exhibits a slight increase in average in-domain performance (-0.3). In out-of-domain settings, this model consistently outperforms CLSR-FT across all languages (avg -0.6), which confirms our initial hypothesis that the KD loss transfers the robustness of the teacher to the final model. Overall, these results highlight the effectiveness of our proposed architecture: we were able to reduce the out-of-domain performance gap between whisper-large-v2 and whisper-small by 35.2% (avg -5.5), with a parameter overhead at inference time of only 10% (25 M).

Effect of training data size. We now show the effectiveness of our approach in lower- and higher-resource data settings. For this, we select a subset of languages for which more training data is available on CV-13 (ca, th, ta). Table 2 presents results for our approach in a low-resource setting (3k utterances; ~4 h) and a higher-resource setting (28k utterances; ~40 h), compared to the 10k results from Table 1. We observe that, as expected, increasing the amount of training examples leads to superior ASR performance for both approaches, with KD (DistilWhisper) being consistently superior to CLSR-FT. For the 28k setup (ca, th, ta), we are able to reduce the out-of-domain WER gap between whisper-large-v2 and whisper-small by 75% (from 12 to 3 WER); their average FLEURS scores for ca, th, ta are respectively 12.5 and 24.5. For the 3k setup, we reduce the WER gap by 35.8% using only 4 h of training data. This implies that our approach has the potential to improve ASR performance across low-resource languages for which less training data is available.

Fig. 2. Ratio of LS layers chosen by the models (CLSR-FT and DistilWhisper) depending on (1) the amount of training data (3k, 10k, 28k); (2) in-domain (CV-13) or out-of-domain (FLEURS) evaluation; (3) the language (Catalan, Thai, Tamil). The y-axis shows the LS activation ratio (%).

Gate Activation Analysis. To better understand how the model uses the routing mechanism, we plot gate activation statistics for both CLSR-FT and DistilWhisper in Figure 2. We observe that the models tend to rely more on the new LS modules in out-of-domain settings (FLEURS vs. CV-13), which could be attributed to the greater complexity and larger size of the sentences in FLEURS. Also, as expected, increasing the training data size leads to more reliable LS modules, and therefore higher LS usage.
The only exception for this is Thai at the 28k setup, and this might be due to dataset quality and requires further investigation. When comparing the 3 languages, we observe that Catalan exhibits a higher reliance on LS routes, which could also be related to the data quality for this language in CV-13. Finally, we observe that for languages with a weaker teacher (Thai, Tamil) the model may receive contradictory signals at lower-resource settings (3k, 10k), leading to less LS routing usage with KD. However, in the higher resource setting (28k), KD usage leads systematically to more reliable LS module and therefore higher LS routing. 6." + } + ], + "Marcely Zanon Boito": [ + { + "url": "http://arxiv.org/abs/2205.01987v1", + "title": "ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks", + "abstract": "This paper describes the ON-TRAC Consortium translation systems developed for\ntwo challenge tracks featured in the Evaluation Campaign of IWSLT 2022:\nlow-resource and dialect speech translation. For the Tunisian Arabic-English\ndataset (low-resource and dialect tracks), we build an end-to-end model as our\njoint primary submission, and compare it against cascaded models that leverage\na large fine-tuned wav2vec 2.0 model for ASR. Our results show that in our\nsettings pipeline approaches are still very competitive, and that with the use\nof transfer learning, they can outperform end-to-end models for speech\ntranslation (ST). For the Tamasheq-French dataset (low-resource track) our\nprimary submission leverages intermediate representations from a wav2vec 2.0\nmodel trained on 234 hours of Tamasheq audio, while our contrastive model uses\na French phonetic transcription of the Tamasheq audio as input in a Conformer\nspeech translation architecture jointly trained on automatic speech\nrecognition, ST and machine translation losses. Our results highlight that\nself-supervised models trained on smaller sets of target data are more\neffective to low-resource end-to-end ST fine-tuning, compared to large\noff-the-shelf models. Results also illustrate that even approximate phonetic\ntranscriptions can improve ST scores.", + "authors": "Marcely Zanon Boito, John Ortega, Hugo Riguidel, Antoine Laurent, Lo\u00efc Barrault, Fethi Bougares, Firas Chaabani, Ha Nguyen, Florentin Barbier, Souhir Gahbiche, Yannick Est\u00e8ve", + "published": "2022-05-04", + "updated": "2022-05-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction The vast majority of speech pipelines are developed for and in high-resource languages, a small percentage of languages for which there is a large amount of annotated data freely available (Joshi et al., 2020). However, the assessment of systems\u2019 performance only on high-resource settings can be problematic because it fails to re\ufb02ect the realworld performance these approaches will have in diverse and smaller datasets. In this context, the IWSLT 2022 (Anastasopoulos et al., 2022) proposes two interesting shared tasks: low-resource and dialect speech translation (ST). The former aims to assess the exploitability of current translation systems in data scarcity settings. The latter focuses on the assessment of the systems capabilities in noisy settings: different dialects are mixed in a single dataset of spontaneous speech. For the low-resource task, this year\u2019s language pairs are: Tamasheq-French and Tunisian Arabic-English. 
The latter is also used, in constrained conditions, for the dialect task. This paper reports the ON-TRAC consortium submissions for the mentioned tasks. The ON-TRAC Consortium is composed of researchers from three French academic laboratories, LIA (Avignon University), LIUM (Le Mans University) and LIG (University Grenoble Alpes), together with two industrial partners: Airbus France and ELYDATA. Our systems for the dialect task focus on the comparison between cascaded and end-to-end approaches for ST. For the low-resource task, we focus on the leveraging of models based on self-supervised learning (SSL), and on the training of ST models with joint automatic speech recognition (ASR), machine translation (MT) and ST losses. This paper is organized as follows. Section 2 presents the related work. The experiments with the Tunisian Arabic-English dataset for lowresource and dialect ST tasks are presented in Section 3. Results for the Tamasheq-French dataset for the low-resource track are presented in Section 4. Section 5 concludes this work. 2 Related work Before the introduction of direct or end-to-end ST models (Berard et al., 2016; Weiss et al., 2017), the ST task was approached as a cascaded problem: the speech is transcribed using an ASR model, and \fthe transcriptions are used to train a classic MT model. The limitations of this approach include the need for extensive transcriptions of the speech signal, and the error propagation between ASR and MT modules. In comparison to that, end-toend ST models propose a simpler encoder-decoder architecture, removing the need for intermediate representations of the speech signal. Although at \ufb01rst, cascaded models were superior in performance compared to end-to-end models, results from recent IWSLT campaigns illustrate how endto-end models have been closing this gap (Ansari et al., 2020; Bentivogli et al., 2021; Anastasopoulos et al., 2021). Moreover, the joint optimization of ASR, MT and ST losses in end-to-end ST models was shown to increase overall performance (Le et al., 2020; Sperber et al., 2020). SSL models for speech processing are now a popular foundation blocks in speech pipelines (Schneider et al., 2019; Hsu et al., 2021; Baevski et al., 2019, 2020). These models are large trainable networks with millions, or even billions (Babu et al., 2021), of parameters that are trained on unlabeled audio data only. The goal of training these models is providing a powerful and reusable abstraction block, which is able to process raw audio in a given language or in multilingual settings (Conneau et al., 2020; Babu et al., 2021), producing a richer audio representation for the downstream tasks to train with, compared to surface features such as MFCCs or \ufb01lterbanks. Recent work found considerable performance gains and/or state-of-the-art performance by including these blocks in their target tasks, and more importantly, the \ufb01nal models can be trained with a smaller amount of labeled data, increasing the accessibility of current approaches for speech processing (Kawakami et al., 2020; Schneider et al., 2019; Hsu et al., 2021; Baevski et al., 2019, 2020).1 3 Tunisian Arabic-English Experiments In this section we present our experiments for translating Tunisian Arabic to English in the context of the dialect and low-resource tasks from IWSLT 2022. Section 3.1 describes the data used in our experiments. We investigate two types of ST architec1Recent benchmarks for SSL models can be found in Evain et al. (2021b,a); wen Yang et al. 
(2021); Conneau et al. (2022). tures: end-to-end architectures (Section 3.3), and pipeline models (Section 3.2). For the latter, we include the obtained ASR results. For both, results on the ST tasks are presented in Section 3.4. 3.1 Data The Tunisian Arabic dataset (LDC2022E01) use in our experiments was developed and provided by LDC2 to the IWSLT 2022 participants. It comprises 383 h of Tunisian conversational speech with manual transcripts, from which 160 h are also translated into English. Thus, it is a three-way parallel corpus (audio, transcript, translation). This LDC data consistitutes basic condition of the dialect task. Arabic dialects are the informal form of communication in the everyday life in the Arabic world. Tunisian Arabic is one of several Arabic dialects: there is no standard written Arabic form for this language that is shared by all Tunisian speakers. Nevertheless, the transcripts of Tunisian conversations of the LDC2022E01 Tunisian Arabic dataset follow the rules of the Tunisian Arabic CODA \u2013 Conventional Orthography for Dialectal Arabic. For the dialect adaptation condition, we use in addition to the LDC2022E01 dataset, the MGB2 dataset (Ali et al., 2016), which is composed of 1,200 h of broadcast news audio recordings in modern standard Arabic (MSA) from Aljazeera TV programs. These recordings are associated to captions with no timing information: they are not verbatims of the speech content, and can be an approximation. The MGB2 dataset also contains the automatic transcriptions generated by the Qatar Computing Research Institute (QCRI) ASR system. This external dataset is used for training our ASR systems. 3.2 Pipeline ST For our pipeline ST models, we experiment with two different ASR architectures, presented in Section 3.2.1. We also train two MT models, presented in Section 3.2.2. 3.2.1 ASR system End-to-end ASR model. Our end-to-end ASR system is implemented on the SpeechBrain toolkit (Ravanelli et al., 2021). It is composed of a wav2vec 2.0 module, a 1024-dimension dense hidden layer with a Leaky ReLU activation function, and a softmax output layer. The weights of 2https://www.ldc.upenn.edu/ \fthe wav2vec 2.0 module were initialized from the XLSR-53 model released by Meta (Conneau et al., 2020). The CTC loss function (Graves et al., 2006) was used during the training process, and two different instances of Adam (Kingma and Ba, 2015) optimizers were used to manage the weight updates: one dedicated to the wav2vec 2.0 module, the other one to the two additional layers. The output of the end-to-end model is based on characters. The training of our model is separated in two stages. First, we train an end-to-end ASR model in MSA using the MGB2 data. To process this data, we used a dictionary of 95 characters (i.e. 95dimensional output layer). Among the 1,200 h of speech associated to captions and automatic transcripts in the MGB2 dataset, we keep only the audio segments for which the captions and the automatic transcripts are strictly the same. This corresponds to roughly 820 h of speech. Once our model in standard Arabic is trained, we use it to initialize our \ufb01nal Tunisian Arabic ASR model. The architecture is kept the same, excluding the 34-dimensional output layer, and we randomly reinitialise the weights of the 2 last layers. In other words, we keep only the weights of the ASR MGB2 \ufb01ne-tuned wav2vec 2.0 model, performing transfer learning from MSA to Tunisian Arabic. 
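As a rough, hedged illustration of this architecture — a wav2vec 2.0 encoder followed by a dense layer and a softmax output trained with CTC, with separate optimizers for the encoder and the added layers — a generic PyTorch/transformers sketch could look as follows. This is not the authors' SpeechBrain recipe; the checkpoint name, learning rates and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class W2VCTCModel(nn.Module):
    """wav2vec 2.0 encoder + dense hidden layer + character softmax, trained with CTC."""
    def __init__(self, ssl_name="facebook/wav2vec2-large-xlsr-53", n_chars=95, hidden=1024):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        enc_dim = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(enc_dim, hidden), nn.LeakyReLU(),
                                  nn.Linear(hidden, n_chars + 1))  # +1 for the CTC blank

    def forward(self, waveforms):
        feats = self.encoder(waveforms).last_hidden_state      # (B, T, enc_dim)
        return self.head(feats).log_softmax(dim=-1)            # (B, T, n_chars + 1)

model = W2VCTCModel()
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
# Two optimizer instances, as described above: one for the SSL encoder, one for the new layers.
opt_encoder = torch.optim.Adam(model.encoder.parameters(), lr=1e-5)
opt_head = torch.optim.Adam(model.head.parameters(), lr=1e-4)
```

A training step would compute the CTC loss on the log-probabilities (transposed to time-major, as nn.CTCLoss expects) and then step both optimizers.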
We then train the end-to-end ASR model on the Tunisian audio data of the LDC2022E01 dataset and its normalized transcriptions. Lastly, we train a 5-gram language model (LM) on the normalized transcriptions.

Hybrid HMM/TDNN ASR system. In addition to the end-to-end ASR system described above, we train a Kaldi-based system (Povey et al., 2011). The acoustic model uses chain models with the TDNN architecture and 40-dimensional high-resolution MFCCs extracted from frames of 25 ms length and 10 ms shift, applying the usual data augmentation methods: speed perturbation at rates of 0.9, 1.0, and 1.1, and spectral augmentation. We employ a graphemic lexicon of 88k words, and we use a 3-gram LM built using the SRILM toolkit (Stolcke, 2002) with Kneser-Ney smoothing. This 3-gram LM is trained on the transcripts of the training set, with a vocabulary covering all the words of the graphemic lexicon.

ASR performance. Tunisian Arabic ASR results for 3 different models are presented in Table 1.

Table 1: Results for Tunisian Arabic ASR systems in terms of WER. Submissions to the low-resource track.
System | Description | valid | test
primary | E2E w/o LM | 41.1 | 45.1
not submitted | HMM/TDNN | 50.3 | -
post-evaluation | E2E + 5-gram | 38.3 | 41.5

The primary system is the end-to-end ASR model described above, without LM rescoring. The second row presents the result for the hybrid HMM/TDNN system; due to its lower performance on the validation data in comparison to the end-to-end system, we decided not to submit this system. The last row presents the results for the end-to-end ASR with the 5-gram LM, a post-evaluation result.

3.2.2 MT model
We train two MT models using the fairseq toolkit (Ott et al., 2019). The first model (contrastive1) is a bi-LSTM model from Luong et al. (2015), trained using the lstm_luong_wmt_en_de recipe (https://fairseq.readthedocs.io/en/latest/_modules/fairseq/models/lstm.html). Both the encoder and the decoder consist of 4 LSTM layers, and the input is at the sub-word level using a BPE vocabulary of 8,000 units trained on the target language. The second model (contrastive2) is a fully convolutional model following the fconv_wmt_en_fr sequence-to-sequence architecture from Gehring et al. (2017) (https://fairseq.readthedocs.io/en/latest/models.html). It consists of 15 encoder and decoder layers, working at the sub-word level with input and output vocabularies of 4,000 BPE units.

3.3 End-to-end ST
The end-to-end ST model is a Conformer model (Gulati et al., 2020) based on the ESPnet toolkit (Watanabe et al., 2018). This system is trained using 80-channel log-mel filterbank features computed on a 25 ms window with a 10 ms shift. We also use speed perturbation at ratios of 0.9, 1.0, and 1.1, and SpecAugment (Park et al., 2019) with 2 frequency masks and 5 time masks. In addition, a global Cepstral Mean and Variance Normalization (CMVN) is applied on top of our features. Our Conformer model consists of a 6-block Conformer encoder and a 6-block Transformer decoder. We use 1,000 BPE tokens as the modeling units. The model is trained for 100 epochs and the last 10 best checkpoints are averaged to create the final model.

3.4 Results
Table 2 presents our ST results for the dialect and low-resource tracks.

Table 2: Results for Tunisian Arabic to English translation systems in terms of %BLEU for the low-resource (LR) and dialect (D) tracks.
System | Track | Description | valid | test
primary | LR/D | End-to-end | 12.2 | 12.4
contrastive1 | LR | Cascade | 15.1 | 13.6
contrastive2 | LR | Cascade | 12.8 | 11.3
post-evaluation | LR | Cascade | 16.0 | 14.4
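As a hedged illustration of the acoustic front-end described in Section 3.3 — 80-channel log-mel filterbanks over 25 ms windows with a 10 ms shift, SpecAugment-style masking, and CMVN — a torchaudio sketch could look like the following. It is an approximation rather than the exact ESPnet pipeline: the mask widths are assumed values, CMVN is applied here per utterance instead of globally, and speed perturbation (which would be applied to the waveform before feature extraction) is omitted.

```python
import torch
import torchaudio

fbank_cfg = dict(num_mel_bins=80, frame_length=25.0, frame_shift=10.0)
freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=27)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=40)

def extract_features(wav_path, train=True):
    """80-dim log-mel filterbanks + optional SpecAugment-style masking + CMVN."""
    waveform, sample_rate = torchaudio.load(wav_path)
    feats = torchaudio.compliance.kaldi.fbank(
        waveform, sample_frequency=sample_rate, **fbank_cfg)     # (T, 80)
    if train:
        # 2 frequency masks and 5 time masks, as in the setup described above.
        spec = feats.t().unsqueeze(0)                            # (1, 80, T)
        for _ in range(2):
            spec = freq_mask(spec)
        for _ in range(5):
            spec = time_mask(spec)
        feats = spec.squeeze(0).t()
    # Simple utterance-level cepstral mean and variance normalization.
    return (feats - feats.mean(dim=0)) / (feats.std(dim=0) + 1e-8)
```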
Our primary system for both tracks is the end-to-end system presented in Section 3.3. The two pipeline systems, contrastive1 and contrastive2, are composed by the end-toend ASR model, and they vary on the MT model used (presented in Section 3.2.2). Since ASR models use external data (MGB2), these submissions are for the low-resource track only. Finally, the post-evaluation model is the composition of the post-evaluation end-to-end ASR model from Section 3.2.1, and the MT model from contrastive1. We observe that our cascaded models are very competitive compared against our end-to-end model (primary submission): our best ST result is obtained using the contrastive1. The postevaluation model, which adds an 5-gram LM on the end-to-end ASR module, achieves even better scores. We believe that part of the reason this model is effective is the addition of the data in MSA from the MGB2 dataset, that is used to pre-train the end-to-end ASR model. Thus, the comparison between our cascaded and end-to-end models is not exactly fair, as out end-to-end model is trained on less data. Moreover, we would like to highlight that although this dataset is offered as part of the lowresource track, we do not consider this setting to be one of data scarcity: 160 h of translated speech are available. We do, however, \ufb01nd this dataset to be extremely complex to work with. That is because there are multiple regional dialects from Tunisia mixed in the data, which makes the ST task harder. These regional dialects differ mainly on their accent, but sometimes also in terms of vocabulary and expression. Nonetheless, we \ufb01nd that the real challenge for processing this data comes from its nature. This dataset is a collection of telephonic conversations, where the acoustic conditions can be sometimes very challenging: some phone calls are made from mobile phones in very noisy environments, and sometimes some portions of audio recordings are saturated because of sudden high audio input gain. By computing the WER on each audio recording in the validation set using our best ASR model, we observe that the lowest one achieved is 18.3%, while the highest one is 88.5%. Thus, we achieve a global WER of 38.3% (post-evaluation in Table 1), with a standard deviation is 12.3%. This illustrates the high variability in terms of audio quality that might exist in this dataset. 4 Tamasheq-French Experiments In this section we present our experiments for the Tamasheq-French dataset in the context of the low-resource ST track. This dataset, recently introduced in Boito et al. (2022), contains 17 h of speech in the Tamasheq language, which corresponds to 5,829 utterances translated to French. Additional audio data was also made available through the Niger-Mali audio collection: 224 h in Tamasheq and 417 h in geographically close languages (French from Niger, Fulfulde, Hausa, and Zarma).5 For all this data, the speech style is radio broadcasting, and the dataset presents no transcription. Our experiments are separated in two different investigation branches: 1. The exploitation of SSL wav2vec 2.0 models (Baevski et al., 2020) for low-resource direct speech-to-text translation; 2. The production of approximate phonetic transcriptions for attenuating the challenge of training in low-resource settings. 
We start by presenting the models proposed for the first branch: the SSL models pre-trained and/or fine-tuned for Tamasheq in Section 4.1, the pipeline experiments that use wav2vec 2.0 models as feature extractors in Section 4.2, and our primary system, an end-to-end architecture that directly fine-tunes a wav2vec 2.0 model, in Section 4.3. Section 4.4 focuses on the second branch of experiments, presenting our contrastive model, which is based on the joint optimization of ASR, MT and ST losses. This is made possible by the use of a French ASR system for generating an approximate phonetic transcription of the Tamasheq audio. In Section 4.5, we present and discuss our results, and lastly, Section 4.6 describes some less successful experiments.

4.1 SSL models
Pre-trained models. We train two wav2vec 2.0 base models using the Niger-Mali audio collection (available at https://demo-lia.univ-avignon.fr/studios-tamani-kalangou/). The Tamasheq-only model uses the 224 h in Tamasheq, and the Niger-Mali model uses all the data available: 641 h in five languages. Additionally, we include in the training data for both models the 19 h present in the full release of the Tamasheq-French corpus (https://github.com/mzboito/IWSLT2022_Tamasheq_data). Therefore, both models are pre-trained on the target data. For training them, we use the same hyperparameters as the original wav2vec 2.0, as well as the original fairseq (Ott et al., 2019) implementation. These models are trained for 500k updates on 16 Nvidia Tesla V100 (32GB) GPUs, and they are available for download at HuggingFace (https://huggingface.co/LIA-AvignonUniversity).
Fine-tuned models. We experiment with the 7K large French wav2vec 2.0 model (LB-FR-7K) from LeBenchmark (Evain et al., 2021b), and the multilingual XLSR-53 (Conneau et al., 2020). Both models are fine-tuned on the 243 h of Tamasheq (224 h + 19 h) for approximately 20k updates on 4 Nvidia Tesla V100 (32GB) GPUs. Finally, using the Tamasheq-only model, we also experiment with fine-tuning it for the ASR task in MSA (the primary ASR model from Section 3.2).

4.2 Pipeline SSL+ST models
Our models are very close to the recipe for low-resource ST from wav2vec 2.0 features described in Evain et al. (2021a). We use the fairseq s2t toolkit (Wang et al., 2020) for training an end-to-end ST Transformer model (Vaswani et al., 2017) with 4 heads, dimensionality of 256, inner projection of 1,024, 6 encoder and 3 decoder layers. The Transformer is preceded by a 1D convolutional layer (k=5, stride=2) for down-projecting the wav2vec 2.0 large (1,024) or base (768) features into the Transformer input dimensionality. These models are trained for 500 epochs using the Adam optimizer (Kingma and Ba, 2015) with 10k warm-up steps. For decoding, we use beam search with a beam size of 5. For these models and the ones from Section 4.3, we generate a 1k unigram vocabulary for the French text using Sentencepiece (Kudo and Richardson, 2018), with no pre-tokenization. Lastly, we include baseline results that replace the wav2vec 2.0 features by 80-dimensional mel filterbank (MFB) features. In this setting, the CNN preceding the Transformer encoder is identical to the one in Evain et al. (2021a).

4.3 End-to-end SSL+ST models
Training an end-to-end ST model from a pre-trained speech encoder was first proposed in Li et al. (2021). In this work, our end-to-end ST model is similar to the end-to-end ASR model presented in Section 3.2.1.
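As a hedged sketch of the pipeline model of Section 4.2 — pre-extracted wav2vec 2.0 features down-projected by a strided 1D convolution and fed to a small Transformer encoder-decoder — the following PyTorch code illustrates the shape handling. It is an approximation of the fairseq s2t architecture, not the exact recipe; hyperparameters follow the text where given and are otherwise assumptions.

```python
import torch
import torch.nn as nn

class SmallS2TTransformer(nn.Module):
    """1D conv down-projection (k=5, stride=2) + Transformer encoder-decoder over
    pre-extracted wav2vec 2.0 features (768-d base or 1,024-d large)."""
    def __init__(self, feat_dim=768, d_model=256, vocab_size=1000):
        super().__init__()
        self.subsample = nn.Conv1d(feat_dim, d_model, kernel_size=5, stride=2, padding=2)
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=6, num_decoder_layers=3,
                                          dim_feedforward=1024, batch_first=True)
        self.tgt_embed = nn.Embedding(vocab_size, d_model)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, feats, tgt_tokens):
        # feats: (B, T, feat_dim) wav2vec 2.0 features; tgt_tokens: (B, U) BPE ids.
        x = self.subsample(feats.transpose(1, 2)).transpose(1, 2)   # (B, ~T/2, d_model)
        tgt = self.tgt_embed(tgt_tokens)
        causal = self.transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(feats.device)
        out = self.transformer(src=x, tgt=tgt, tgt_mask=causal)
        return self.out_proj(out)                                   # (B, U, vocab_size)
```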
It is also implemented on SpeechBrain, and it comprises a wav2vec 2.0 as speech encoder, followed by a linear projection, and the Transformer Decoder from Section 4.2. The weights for the wav2vec 2.0 speech encoder are initialized from one of the models in Section 4.2, and the model is trained on the NLL loss. As in Section 3.2, two different instances of the Adam optimizer manage the weight updates: one dedicated to the wav2vec 2.0 module, the other one to the following layers. Inspired by the layer-wise investigation for wav2vec 2.0 models described in Pasad et al. (2021), we explore reducing the number of layers in the Transformer encoder that is internal to the wav2vec 2.0 module. This is based on their \ufb01nding that the Transformer encoder behaves in an auto-encoder fashion and therefore, the intermediate representations might contain a higher level of abstraction from the speech signal. In their work, they show that re-initializing the weights of the \ufb01nal Transformer Encoder layers increases performance in ASR \ufb01ne-tuning. Different from that, we propose to remove these layers altogether, which we believe is bene\ufb01cial for low-resource ST \ufb01ne-tuning for two reasons. First, a reduced wav2vec 2.0 module will still have considerable capacity for encoding the speech, and second, this reduction in number of trainable pa\frameters might facilitate training. For implementing this model, we simply drop the N \ufb01nal encoder layers from our training graph, keeping the \ufb01nal projection. We refer to this architecture as W2V-N+ST, where N is the number of layers, starting from the \ufb01rst, kept during ST training. 4.4 End-to-end ASR+ST models We investigate a ST architecture that jointly optimizes ST, MT and ASR losses, as in Le et al. (2020). For this evaluation campaign however, no Tamasheq transcript nor phonetic transcription was provided, so we create an approximate phonetic transcription (Section 4.4.1) that we use in our end-to-end joint system for ST (Section 4.4.2). 4.4.1 Phonetic transcription for Tamasheq The Tamasheq is a Tuareg language spoken by around 500 thousand speakers, mainly from northern Mali. Its phonological system contains 5 vowels (+2 short vowels) and approximately 21 consonants if we ignore the 6 consonants of Arabic origin that are of marginal use (mostly for loanwords) (Heath, 2005). This leads to a set of 26 phonemes. Almost all of those phonemes appear to occur in French, which contains 36 phonemes, 16 vowels, 17 consonants and 3 glides. This motivates to use a phonetizer pretrained on French in order to \u201ctranscribe\u201d the Tamasheq signal into a sequence of pseudo-Tamasheq phonemes. A phonetic force alignment using a pre-trained Kaldi (Povey et al., 2011) chainTDNN acoustic model was used, followed by an ASR system trained using ESPNet (Watanabe et al., 2018). The model is trained on MFB features, and it uses 12 blocks of Conformer (Gulati et al., 2020) encoders, followed by 6 blocks of Transformer decoders. It uses a hybrid loss between attention mechanism and CTC (Graves et al., 2006). The French corpus is composed of approximately 200 h coming from ESTER1&2 (Galliano et al., 2009), REPERE (Giraudel et al., 2012) and VERA (Goryainova et al., 2014). No LM was used, and the phoneme error rate achieved on the ESTER2 test corpus is of 7,7% (silences are not ignored). We highlight that there is no simple automatic way to evaluate the quality of the phonetic transcriptions we generated on Tamasheq. 
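Returning to the W2V-N+ST configuration of Section 4.3, truncating the top Transformer layers of a wav2vec 2.0 encoder can be sketched with the HuggingFace transformers API as follows. This is a hedged illustration: the paper's implementation uses SpeechBrain, and the checkpoint name below is a generic stand-in for the Tamasheq wav2vec 2.0 models mentioned above.

```python
from transformers import Wav2Vec2Model

# Keep only the first 6 of the 12 Transformer blocks of a wav2vec 2.0 base model
# before ST fine-tuning (the "W2V-6" configuration). Substitute the Tamasheq-only
# checkpoint from the LIA-AvignonUniversity organization in place of this model id.
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
keep_layers = 6
model.encoder.layers = model.encoder.layers[:keep_layers]
model.config.num_hidden_layers = keep_layers
print(sum(p.numel() for p in model.parameters()), "parameters after truncation")
```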
Regarding the quality of the phonetic transcriptions, we did, however, manually verify some of them, and confirmed that they seemed to be of overall good quality.

4.4.2 Architecture
The system is based on the ESPnet2 (Inaguma et al., 2020) ST recipe (https://github.com/espnet/espnet/tree/master/espnet2/st). This end-to-end model is made of 12 blocks of Conformer encoders (hidden size of dimension 1024), followed by 3 blocks of Transformer decoders (hidden size of dimension 2048). Input features are 512-dimensional MFB features extracted from the wave signal. Three losses are jointly used for training, as described in Equation 1: L_ST is the loss for Tamasheq speech to French text translation; L_MT is the loss for Tamasheq pseudo-phonetic transcription to French text translation; and L_ASR is the loss for Tamasheq speech to Tamasheq pseudo-phonetic transcription.

$$L = 0.3 \times L_{ST} + 0.5 \times L_{MT} + 0.2 \times L_{ASR} \quad (1)$$

4.5 Results
Results are presented in Table 3. Our primary submission (W2V-6+ST) uses the Tamasheq-only wav2vec 2.0 base model, with only 6 Transformer encoder layers (from a total of 12). Results with different numbers of layers are presented in Appendix A.1. Our contrastive submission is the end-to-end model from Section 4.4. Finally, the three last rows present complementary results, including a baseline trained on MFB features and two pipeline models. The contrastive2 model uses the Tamasheq-only wav2vec 2.0 model fine-tuned for the Arabic ASR task from Section 3.2 as a feature extractor, while contrastive3 extracts features from the Niger-Mali wav2vec 2.0 base model fine-tuned on Tamasheq. Other pipeline SSL+ST models achieved lower scores, and their results are grouped in Appendix A.2.

Table 3: Results for the pipeline and end-to-end (E2E) Tamasheq-French ST systems in terms of %BLEU score. The first two rows present our submitted systems, while the remainder are complementary post-evaluation results.
System | Description | valid | test
primary | E2E, W2V-6+ST | 8.34 | 5.70
contrastive | E2E, ASR+ST | 6.40 | 5.04
contrastive2 | pipeline, W2V-ASR+ST | 3.62 | 3.17
contrastive3 | pipeline, W2V-FT+ST | 2.94 | 2.57
baseline | pipeline | 2.22 | 1.80

Looking at our results, and concentrating on SSL models, we notice that the models that use wav2vec 2.0 as a feature extractor (contrastive2 and contrastive3) achieve better performance compared to a baseline using MFB features. However, this finding does not hold for the wav2vec 2.0 large models fine-tuned on Tamasheq (XLSR-53 and LB-FR-7K), which scored as poorly as our baseline (results in Appendix A.2). We find this result surprising, especially in the case of the multilingual model (XLSR-53). This could mean that these large models are not useful as feature extractors for low-resource settings, even after task-agnostic fine-tuning on the target language. Regarding the fine-tuning procedure, as in Evain et al. (2021a), we notice that ASR fine-tuning is more beneficial to ST than task-agnostic fine-tuning: contrastive2 achieves better scores than contrastive3. We find this result interesting, considering that the ASR fine-tuning performed in this case did not target Tamasheq, but MSA. This could mean that, when languages are sufficiently similar, ASR fine-tuning in a different language could be performed to increase performance on a low-resource language without transcripts. Regarding our primary system, we found better results by reducing the number of trainable encoder layers inside the wav2vec 2.0 module.
We also investigated freezing it partially or entirely during end-to-end ST training, but this resulted in performance decrease in the validation set. Regarding the different wav2vec 2.0 models trained (Section 4.1), and focusing on our primary model, we \ufb01nd that similar to pipeline SSL+ST models, we achieved our best results with base architectures (Tamasheq-only and NigerMali). Close seconds to the performance obtained with our primary model (on the validation set) were the models using the same wav2vec 2.0 modules from contrastive2 and contrastive3. These results indicate that having a dedicated wav2vec 2.0 model trained on the target or on close languages is indeed better than \ufb01netuning large monolingual (LB-FR-7K) or multilingual (XLSR-53) models.9 This is particularly interesting considering that the Tamasheq9By close we mean: (1) languages that are geographically close and with a known degree of lexical borrowing; (2) similar speech style and recording settings. only model is trained with only 234 h of speech, whereas XLSR-53 learned from approximately 56 thousand of hours. We believe that more investigation is necessary in order to con\ufb01rm the observed trend. Finally, we \ufb01nd the gap between the primary\u2019s performance in validation and test sets surprising, and we intend to investigate this further as well. Concluding, the contrastive model we propose in our submission presents a different approach for low-resource ST. By creating an approximate transcription of the Tamasheq audio, we are able to train more effectively, reaching a performance close to our primary model for the test set. This illustrates how transcriptions can be an effective form of increasing performance in low-resource settings, even when these are automatically generated. A possible extension of this work would be the combination of our primary and contrastive models: by inserting the primary\u2019s wav2vec 2.0 speech encoder into the training framework from the contrastive model, one can hypothesize that we could achieve even better scores. 4.6 Other Approaches XLS-R ST model. During development, we tried to apply XLS-R for translation (Babu et al., 2021), using the implementation available on the HuggingFace.10 In this approach, we aimed to use the pre-trained model, that is trained on 21 source languages with one target language (English), called wav2vec2-xls-r-300m-21-to-en to \ufb01rst translate the Tamasheq validation set to English. Then, as a second step, to translate the English system output to French. However, we observed that the decoder, based on a mBART (Liu et al., 2020), repeated several groups of tokens during decoding of up to hundreds of times. For example, the phrase: \u201cthe sun was shining in the sky\u201d for the sentence: \u201cIn the evening, the sun was shining in the sky, and the sun was shining in the sky...\u201d was repeated 32 times. This illustrates that out-of-shelf models can still fail to provide decent results in zero-shot settings. ST \ufb01ne-tuning for large wav2vec 2.0 models. All end-to-end models described in Section 4.3 are trained on a single Nvidia Tesla V100 (32GB). This limited our investigation using large wav2vec 2.0 models, since these only 10https://huggingface.co/facebook/ wav2vec2-xls-r-300m-21-to-en \f\ufb01t in this size of GPU after extreme reduction of the decoder network. 
Therefore, we \ufb01nd dif\ufb01cult to assess if the inferior performance of these large end-to-end models is due to the architecture size, or due to the speech representation produced by the wav2vec 2.0 models. In any case, reducing the number of encoder layers, and freezing some of the initial ones, resulted in better performance. The attained scores were however still inferior compared to pipeline models. 5" + }, + { + "url": "http://arxiv.org/abs/2204.01397v2", + "title": "A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems", + "abstract": "Self-supervised models for speech processing emerged recently as popular\nfoundation blocks in speech processing pipelines. These models are pre-trained\non unlabeled audio data and then used in speech processing downstream tasks\nsuch as automatic speech recognition (ASR) or speech translation (ST). Since\nthese models are now used in research and industrial systems alike, it becomes\nnecessary to understand the impact caused by some features such as gender\ndistribution within pre-training data. Using French as our investigation\nlanguage, we train and compare gender-specific wav2vec 2.0 models against\nmodels containing different degrees of gender balance in their pre-training\ndata. The comparison is performed by applying these models to two\nspeech-to-text downstream tasks: ASR and ST. Results show the type of\ndownstream integration matters. We observe lower overall performance using\ngender-specific pre-training before fine-tuning an end-to-end ASR system.\nHowever, when self-supervised models are used as feature extractors, the\noverall ASR and ST results follow more complex patterns in which the balanced\npre-trained model does not necessarily lead to the best results. Lastly, our\ncrude 'fairness' metric, the relative performance difference measured between\nfemale and male test sets, does not display a strong variation from balanced to\ngender-specific pre-trained wav2vec 2.0 models.", + "authors": "Marcely Zanon Boito, Laurent Besacier, Natalia Tomashenko, Yannick Est\u00e8ve", + "published": "2022-04-04", + "updated": "2022-07-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction Recently, models based on self-supervised learning (SSL) for speech processing [1, 2, 3, 4] emerged as popular foundation blocks in speech pipelines. These models are large trainable networks with millions or even billions [5] of parameters that are trained on unlabeled audio data, hence selfsupervised. The goal of training these models is providing a powerful and reusable abstraction block, able to process raw audio in a given language or in multilingual settings [6, 5], producing a richer audio representation for the downstream tasks to train with, compared to standard features such as MFCCs or \ufb01lterbanks. Recent work found considerable performance gains and/or state-of-the-art performance by including these blocks in downstream tasks. Most of them focused in automatic speech recognition (ASR) [7, 1, 2, 3, 4], but recent speech benchmarks [8, 9, 10] cover tasks such as speech translation (ST), spoken language understanding, emotion recognition from speech and more. 
Regarding the use of the self-supervised block in downstream tasks, they can be used either as: (1) a feature extractor, with no \ufb01ne-tuning of the trained weights during downstream task training being performed; or as (2) a speech encoder, with \ufb01ne-tuning of the entire model in an end-to-end fashion, together with the additional task-speci\ufb01c modules. However, independently of the approach used for \ufb01netuning, one can expect that the characteristics of the speech data used for pre-training may in\ufb02uence the performance of the downstream task models. In this work, we focus on possible gender bias introduced by unbalanced speech data used to pretrain SSL models. We train gender-speci\ufb01c wav2vec 2.0 [4] models for the French language, and we apply them, together with three off-the-shelf wav2vec 2.0 models with different degrees of gender balance, to two downstream tasks: ASR and ST. For the downstream tasks training, we use the mTEDx dataset [11], whose gender annotation for the French subset is also a contribution of this work. Moreover, we explore the aforementioned strategies (1) and (2) for ASR, and (1) for ST, aiming to also investigate their impact in the gender-speci\ufb01c performance of the task models. Our results show that the type of downstream integration matters. We observe lower overall performance using gender-speci\ufb01c pre-training before an ASR system based on strategy (1). However, when SSL models are used as feature extractors (2), the overall ASR and ST results follow more complex patterns. Gender bias in speech-to-text systems is de\ufb01ned as a systematic worse recognition for a gender category [12]. Pioneer work for ASR [13] found better performance on women\u2019s voices, while a preliminary research on YouTube automatic caption system found better recognition rate for male speech [14], and no gender difference in a follow-up study [15]. Recent work on hybrid ASR systems observed that gender imbalance in data could lead to decreased ASR performance on the gender category least represented [16], but a posterior work from the same authors observed that ASR trained on audio-books is rather robust to gender imbalance [17], and that other factors (such as random seed and individuals in the training data) have an important impact as well. Methodological work discussing how to measure fairness in ASR [18], and position papers on biases in ASR [19] were also published recently. Regarding gender bias in ST systems, recent work focused on the content of the generated text, rather than speech itself [20, 21]. To our knowledge, the only other investigation of gender bias in models for speech processing is the work of Meng et al. [22], but they did not experiment with wav2vec 2.0 SSL models, and did not consider ST and ASR tasks, evaluating downstram performance only on phoneme recognition. In addition, they did not compare strategies (1) and (2) mentioned earlier, and their SSL models were trained only on small subsets of Librispeech (100h), whereas we investigate with models trained on much more data. Lastly, we acknowledge that the de\ufb01nition of gender as a binary category is somehow reducing, but we \ufb01nd ourselves limited by the data and metadata available. This paper is organized as follows. Section 2 presents the data used for pre-training and downstream tasks, and Section 3 \fTable 1: Statistics for the male/female datasets used for SSL training on French speech. Duration written as hours:minutes. 
Dataset Duration # speakers Speech Style MLS [23] 520:13 / 576:29 80 / 98 Read Att Hack [24] 12:07 / 14:54 9 / 11 Acted / Emotional CaFE [25] 00:32 / 00:36 6 / 6 Acted / Emotional CFPP2000 [26] 00:11 / 01:41 2 / 4 Spontaneous ESLO2 [27] 17:06 / 16:57 68 / 120 Spontaneous EPAC [28] 413:41 / 385:52 Unknown Radio Broadcasts GEMEP [29] 00:24 / 00:26 5 / 5 Acted / Emotional PORTMEDIA [30] 19:08 / 19:50 84 / 109 Acted telephone dialogue TCOF [31] 10:47 / 11:22 117 / 162 Spontaneous NCCFr [32] 12:44 / 12:59 24 / 21 Spontaneous TOTAL (M/F) 1,006:59 / 1,041:11 describes the SSL models. Section 4 and 5 present respectively our ASR and ST results. Section 6 summarizes our \ufb01ndings. 2. Data Pre-training Data. For building gender-speci\ufb01c datasets for SSL training, we use the same data from the LeBenchmark [8, 9]. They gathered a massive amount of French audio of different speech styles, and with rich metadata information.1 We select all ten datasets that had gender information, which resulted in 1,041 h of female speech, and 1,006 h of male speech after down-sampling the EPAC dataset for keeping the total duration equivalent between both sets. Table 1 presents key statistics. Speech-to-text Data. For the speech-to-text downstream tasks, we use the mTEDx dataset [11]. Since there was no gender information available, we manually annotated the fr-fr corpus by checking the speaker names, and by watching some of the videos online. Thus, one contribution of this work is the gender annotation in the mTEDx fr-* corpus that is now included in its latest release.2 For ASR, we down-sample the fr-fr subset (172 h), creating a gender balanced subset: we sampled the data by gender, reaching roughly 38 h of gender-speci\ufb01c speech in the training set, which corresponds to a half of the total amount of female speech in the original content. We use only a half of this number because we also created 68 h genderspeci\ufb01c ASR subsets that we intend to compare against this one in future work focusing on gender-bias in ASR \ufb01ne-tuning. For the validation set of this balanced subset, the male speech was up-sampled using the unused male entries from the original training set. The test set was kept the same. For ST, we use the English, Portuguese and Spanish subsets (respectively fr-{en,pt,es}; 48 h, 35 h, 23 h).3 We highlight that the data for ST is a subset of the fr-fr: the validation and test sets are the same. Table 2 presents the statistics. 3. Self-Supervised Learning Models We train two gender-speci\ufb01c wav2vec 2.0 large models using the 1K datasets presented in Section 2, and using the same 1Data available at https://github.com/LeBenchmark/ NeurIPS2021/tree/main/data preprocessing 2Available at http://www.openslr.org/100 3The original paper [11] reports respectively 50 h, 38 h, 25 h, but we compute statistics on speech segments only (not full audio duration). Table 2: Statistics for the fr-fr mTEDx with gender annotation (M=male;F=female;B=speakers of both genders present), its balanced version (ASR), and the three ST subsets. Duration written as hours:minutes. 
Original Content (fr-fr) M F B All train # speakers 550 388 4 942 Duration 100:02 68:28 0:44 169:14 valid # speakers 5 7 12 Duration 0:38 1:00 1:38 test # speakers 6 4 10 Duration 0:54 0:39 1:33 Balanced Dataset (ASR) train # speakers 550 388 938 Duration 34:09 34:09 68:17 valid # speakers 11 7 18 Duration 0:30 0:30 1:00 Translation Datasets (ST) fr-en (train) # speakers 146 102 2 250 Duration 26:28 18:14 0:22 45:04 fr-es (train) # speakers 110 86 196 Duration 17:59 14:31 32:30 fr-pt (train) # speakers 57 55 112 Duration 10:39 9:22 20:01 Table 3: List of wav2vec 2.0 models, number of updates and hours used for pre-training. The last three columns present the percentage of male (M), female (F) and unknown gender (U) speech present in the pre-training dataset. Model # updates # hours M,% F,% U,% F-1K-Large 125K 1,041 0 100 0 M-1K-Large 125K 1,006 100 0 0 LB-1K-Large 200K 1,096 47.4 52.5 0 LB-3K-Large 500K 2,933 62.2 35.2 2.5 LB-7K-Large 500K 7,739 23.9 13.4 62.6 hyperparameters from the original wav2vec 2.0 [4]. We train them using the fairseq library [33], and for 125K updates on 16 Nvidia Tesla V100 (32GB).4 These gender-speci\ufb01c models are added to the collection of pre-trained wav2vec 2.0 models for the French language from the LeBenchmark (LB) [8, 9], and they are available for download at HuggingFace.5 In this work, we investigate the impact of gender distribution in SSL models\u2019 pre-training data, focusing on speech-to-text downstream tasks. We compare the gender-speci\ufb01c models described above against three models of equal capacity from the LB collection (1K-Large, 3K-Large and 7K-Large). These models are relevant because they present different degrees of gender balance in their pre-training data. A summary of all models is presented in Table 3. 4. Automatic Speech Recognition We experiment with two different ASR models: a hybrid deep neural network (DNN) hidden markov model (HMM), and an end-to-end model. For DNN-HMM models, the SSL block is 4Due to training instability in the fairseq library, we were unable to reach 200K updates on the gender-speci\ufb01c models. However, we observed that the trained models at 125K updates achieve a loss that is lower than the one achieved by the LB-1K-Large model on the same validation set. We thus believe that these models are comparable. 5https://huggingface.co/LeBenchmark \fTable 4: Hybrid (a) and end-to-end (b) ASR results (WER) using the wav2vec 2.0 models either as feature extractors (a) or speech encoders (b). Results computed on the mTEDx test set. (a) Hybrid ASR Pre-training WER \u2206rel, % M F All F-1K-Large 25.7 22.3 24.3 -14.2 M-1K-Large 25.4 23.4 24.8 -8.2 LB-1K-Large 25.9 22.9 24.7 -12.3 LB-3K-Large 22.1 20.9 21.5 -5.6 LB-7K-Large 23.1 21.3 22.3 -8.1 (b) End-to-end ASR F-1K-Large 20.9 17.7 19.5 -16.9 M-1K-Large 21.0 18.5 19.9 -12.7 LB-1K-Large 15.3 13.0 14.3 -16.6 LB-3K-Large 15.5 13.5 14.6 -13.9 LB-7K-Large 15.9 13.2 14.7 -19.0 used as a feature extractor, and for end-to-end models it is used as a trainable speech encoder. The performance is evaluated in terms of word error rate (WER). The relative difference of WERs between female and male datasets is computed as Equation 1, and it can be understood as a basic fairness metric. \u2206rel = 100 W ERfemale \u2212W ERmale 0.5 \u00d7 (W ERfemale + W ERmale) (1) 4.1. Hybrid ASR We trained \ufb01ve hybrid DNN-HMM acoustic models using features extracted by the SSL models described in Section 3. 
All models were trained on the balanced dataset (68 h) using the Kaldi toolkit [34] with a factorized time delay neural network (TDNN-F) architecture [35, 36]. The models have 12 TDNN-F layers (1,024-dimensional, with projection dimension of 128) and a 3K dimensional output layer. They were trained using lattice-free maximum mutual information (LF-MMI) [37] and cross-entropy criterion. Speed and volume perturbations have been applied for data augmentation, and 100-dimensional speaker i-vectors were appended to the input features. Finally, a trigram language model (LM) with a 82K vocabulary was used. Results are presented in the top portion (a) of Table 4. We observe that models trained on features extracted from genderspeci\ufb01c pre-trained models performed very closely to the one using features from a model with balanced pre-training (LB1K-Large model). Following intuition, we also observe that among the SSL models trained on 1K hours, the best results for each gender-speci\ufb01c dataset (M and F columns) were obtained when the gender of the SSL model matched the gender of the speakers in the dataset. However, similar to previous work [8, 9], we observe that training a feature extractor on more data (3K and 7K hours) is bene\ufb01cial for hybrid ASR, regardless of the pre-training data distribution (see Table 3). This relative low impact of biased pre-training data was also mentioned in Meng et al. [22] for phoneme recognition. Lastly, we notice that the relative difference of WER between female and male talks (\u2206rel) is not necessarily higher when gender-speci\ufb01c (male or female) pre-trained models are used (\u2206rel is -12.3% with the balanced pre-trained model 1K while it is -8.2% with the maleonly pre-trained model 1K). 4.2. End-to-end ASR Our \ufb01ve end-to-end ASR systems are implemented on the SpeechBrain toolkit [38], being each composed of a wav2vec 2.0 module, a 1024-dimension dense hidden layer with a Leaky ReLU activation function, and a softmax output layer. For each end-to-end model, the weights of the wav2vec 2.0 module were initialized from one of the pretrained models listed in Table 3. The CTC loss function [39] was used for training, and two different instances of Adam [40] optimizers managed the weight updates: one dedicated to the wav2vec 2.0 module, the other one to the two additional layers. The output of the end-to-end model is based on characters: the vocabulary is composed of the 102 lower-case symbols contained in the normalized manual transcriptions of the training set. The models were trained on the balanced dataset (68 h), and no LM was applied. Results are presented in the bottom portion (b) of Table 4. We observe that, different from the previous results (a), the performance of the end-to-end ASR models seems to be very dependent on the balance of the dataset used to pre-train the SSL models. In these experiments, the model based on the wav2vec 2.0 with balanced pre-training data (LB-1K-Large) resulted in the best results for both genders. Moreover, the models based on the gender-speci\ufb01c SSL models achieved poor performance, surprisingly even for the gender they targeted.6 These results illustrate that, when \ufb01ne-tuning an SSL model on the ASR task, the gender biases introduced during the pre-training are crucial for the downstream task, and cannot be \ufb01xed by including more data (inferior performance of 3K and 7K). 
It also seems very important to consider the variability of speakers during the pre-training step: our results showed that the presence of speech for a given gender in the pretraining dataset helps to better transcribe speech for the opposite gender. 5. Speech-to-Text Translation We focus on direct speech-to-text translation, without producing any source language transcription. We use the SSL block as feature extractor. Our ST models follow Evain et al. [9]: we use the fairseq s2t toolkit [41] with their s2t transformer xs architecture (Transformer [42] with 6 encoder layers, 3 decoder layers, hidden dimension of 256). Following common practice [41, 43], utterances with more than 3,000 frames are removed for GPU ef\ufb01ciency. All ST models are trained for 500 epochs using Adam [40] and learning rate of 2 \u00d7 10\u22123. We averaged last 10 checkpoints and used beam search (size of 5) decoding. Reported results are detokenized case-sensitive BLEU computed using sacreBLEU [44] on test set. No speci\ufb01c ASR or MT pre-training (nor data augmentation) is applied as our goal is not to obtain best results, but to analyze impact of SSL pre-training. For extracting the speech features used as input of our ST models, we use all models from Table 3. Table 5 presents overall and separate BLEU on [male, female] groups of TED talks, and the normalized relative difference of performance between female and male talks for all 15 ST models trained.7 For reference, we also include the reported results from the original mTEDx paper [11], which uses mel 6Due to a lack of space, we did not include all the 95% con\ufb01dence intervals. To give an idea of the statistical signi\ufb01cance of these results, notice that for the \ufb01rst model, column All: 24.7% WER \u2208[24.0, 25.5] in (a), and 14.3% WER \u2208[13.2, 15.3] in (b). 7Note that since BLEU is used, the sign of the relative difference will be positive if female scores are better than male scores. This is the opposite to the calculation on WER in Section 4. \fTable 5: Speech translation performance (BLEU) for each pre-trained model and each language pair. Results obtained on the test set of mTEDx. Scores in brackets show BLEU on separate [male, female] talks. \u2206rel = BLEU(female)\u2212BLEU(male) 0.5\u00d7(BLEU(female)+BLEU(male)) . Pre-training fr-en [M,F] \u2206rel, % fr-es [M,F] \u2206rel, % fr-pt [M,F] \u2206rel, % F-1K-Large 14.97 [14.34,15.71] +9.12 15.81 [15.71,15.99] +1.77 10.55 [12.00,8.56] -33.46 M-1K-Large 15.99 [15.90,16.11] +1.31 16.07 [15.55,16.75] +7.43 12.01 [13.21,10.5] -22.86 LB-1K-Large 13.25 [12.62,14.09] +11.01 13.69 [13.37,14.08] +5.17 8.96 [9.73,7.93] -20.39 LB-3K-Large 17.44 [17.24,17.69] +2.58 14.78 [14.84,14.63] -1.43 7.24 [8.07,6.12] -27.48 LB-7K-Large 17.50 [16.58,18.63] +11.64 16.34 [16.29,16.34] +0.31 8.81 [9.83,7.42] -27.94 Table 5 of [11] (bilingual e2e) 8.9 10.6 7.9 \ufb01lterbank features instead of SSL, but also some data augmentation. We observe that mTEDx dataset is challenging for direct ST (low results for all three subsets). The fr-pt results are particularly low, variable and counter-intuitive: 3K and 7K models reach a lower performance compared to 1K, while the opposite is observed for fr-en and fr-es. The same trend difference was observed in previous work [9], and we believe this might be sourced in the data scarcity for this language pair: only 20 h of speech available in the training set. 
Focusing on models with the same amount of pre-training data (1K), we observe medium variability of overall BLEU: for fr-en for instance, it ranges from 13.25 (balanced) to 15.99 (male), depending on the SSL model used to extract features. Similar to the hybrid ASR experiments ((a) in Table 4) and previous work [22], we do not observe a gender-related performance issue in downstream models by using extremely unbalanced SSL models (male and female) as feature extractors in ST. Counterintuitively, the BLEU obtained with ST models that used features from these models is even better than the one obtained with the balanced model. About the relative difference of BLEU between female and male talks (\u2206rel), this metric is not higher when gender-speci\ufb01c SSL models are used: for fren, \u2206rel is +11.01% with the balanced model, while it is only 1.31% with the male-only model. This reinforces that the SSL feature extractors are not causing gender-related performance gap. Moreover, we notice that the \u2206rel is very different from one language pair to another: it is positive for fr-en, and negative for fr-pt. This is particularly interesting considering that the test set is exactly the same, and only the target translation and the amount of training data differ. This suggests that there might exist other strong factors impacting ST performance, such as the target language, and gender distribution in the training sets. 6. Discussion Our assessment of gender bias in SSL models was based on two different forms of downstream integration. When using our SSL blocks as simple feature extractors (hybrid ASR and ST), we observe the same trend: results for gender-speci\ufb01c models were not worse than results with the balanced SSL model. This suggests that the wav2vec 2.0 features remain exploitable speech representations even if SSL models are trained on biased data. Further analysis is needed to understand the reasons behind this observation, but one possible explanation is that wav2vec 2.0 features contain less speaker-speci\ufb01c information. It was shown in Nguyen et al. [45] that speech representations obtained with contrastive predicting coding (an ancestor of wav2vec 2.0) are less speaker-speci\ufb01c, and maybe this aspect is ampli\ufb01ed by the quantization step that is part of the wav2vec 2.0 pipeline. Some more principled analysis such as the one of Pasad et al. [46], which studies layer-wise representations from the wav2vec 2.0, would be needed to con\ufb01rm this hypothesis. When the SSL block is used as a speech encoder in end-to-end ASR training, we \ufb01nd a different trend: using a well-balanced wav2vec 2.0 model leads to better overall performance. We also observe that all SSL models containing speakers from both genders in the pre-training data (1K, 3K, 7K) achieve better results than the gender-speci\ufb01c models. This result illustrates that the interaction between pre-training and \ufb01ne-tuning is complex. At this stage one can only formulate conjectures, but we hypothesize that a gender-balanced pre-training might provide a better initialization for the \ufb01ne-tuning process, which itself relies on both male and female speech. Regarding our basic \u2018fairness\u2019 metric (relative difference of performance measured between female and male test sets), it did not display strong variation from balanced to genderspeci\ufb01c pre-trained models. 
Many other factors may have more impact on performance such as language pair (for ST), amount of training data for \ufb01ne-tuning models (ASR, ST), speech-totext approach (hybrid ASR versus end-to-end ASR), and even random seed used for model initialization (as shown in Garnerin et al. [17]).8 We also \ufb01nd important to highlight a possible limitation in our investigation regarding speaker diversity in the French test set of mTEDx (only 10 speakers). In future work we intend to extend our ASR experiments using a richer variability of speakers in the test set. Finally, this investigation focused on wav2vec 2.0 architecture; our results are thus limited to a single SSL model and should be interpreted accordingly. Concluding, investigating gender bias in pre-training, \ufb01netuning, and inference for a speech-to-text pipeline is complex, and all these steps need to be carefully controlled. In this work we focused on the impact of the pre-training step. In the setting where a pre-trained model is used as a feature extractor, we observed the same trend for two downstream tasks (hybrid ASR and ST): the impact of pre-training seems to be less important than other factors. However, in the setting where the pre-trained model is used to initialize a speech encoder, pre-training on a biased speech corpus may hurt the performance. This illustrates the non trivial interaction between pre-training and \ufb01ne-tuning processes. We believe that careful investigation of the layerwise representations produced by these SSL models might help us better understand these aspects. 7. Acknowledgements This work used HPC resources from GENCI-IDRIS (2020A0111012991, 2021-AD011013317 and 2021-AD011013331). It was also funded by the European Commission through the SELMA project under grant number 957017. 8Due to the total number of models already trained for this study, the analysis of model stability using multiple runs was left for future work. \f8." + }, + { + "url": "http://arxiv.org/abs/2201.05051v3", + "title": "Speech Resources in the Tamasheq Language", + "abstract": "In this paper we present two datasets for Tamasheq, a developing language\nmainly spoken in Mali and Niger. These two datasets were made available for the\nIWSLT 2022 low-resource speech translation track, and they consist of\ncollections of radio recordings from daily broadcast news in Niger (Studio\nKalangou) and Mali (Studio Tamani). We share (i) a massive amount of unlabeled\naudio data (671 hours) in five languages: French from Niger, Fulfulde, Hausa,\nTamasheq and Zarma, and (ii) a smaller 17 hours parallel corpus of audio\nrecordings in Tamasheq, with utterance-level translations in the French\nlanguage. All this data is shared under the Creative Commons BY-NC-ND 3.0\nlicense. We hope these resources will inspire the speech community to develop\nand benchmark models using the Tamasheq language.", + "authors": "Marcely Zanon Boito, Fethi Bougares, Florentin Barbier, Souhir Gahbiche, Lo\u00efc Barrault, Mickael Rouvier, Yannick Est\u00e8ve", + "published": "2022-01-13", + "updated": "2022-04-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction The vast majority of speech pipelines are developed for and in high-resource languages, a small percentage of languages for which there is a large amount of annotated data freely available (Joshi et al., 2020). 
This not only limits the investigation of language impact in current pipelines, as the applied languages are usually from the same subset, but it also fails to re\ufb02ect the realworld performance these approaches will have in diverse and smaller datasets. In recent years, the IWSLT evaluation campaign1 introduced a low-resource speech translation track focused on developing and benchmarking translation tools for under-resourced languages. While for a vast majority of these languages, there is not enough parallel data for training large translation models, in these cases we might still have access to limited disparate resources, such as word-level translations, small parallel textual data, monolingual texts and recordings. This track of the IWSLT campaign thus focuses on leveraging these different kinds of data for building effective translation systems under realistic settings. In this paper we present the resources in the Tamasheq language we share in the context of the IWSLT 2022: low-resource speech translation track. Tamasheq is a variety of Tuareg, a Berber macro-language spoken by nomadic tribes across North Africa in Algeria, Mali, Niger and Burkina Faso (Heath, 2006). It accounts for approximately 500,000 native speakers, being mostly spoken in Mali and Niger (Ethnologue: Languages of the World, 2021). We share a large audio corpus, made of 224 hours of Tamasheq, together with 417 hours in other four languages of Niger (French, Fulfulde, Hausa and Zarma). We also share a smaller corpus of 17 hours 1https://iwslt.org/2022/low-resource of Tamasheq utterances aligned with French translations. We hope that these resources will represent an interesting use-case for the speech community, allowing them to not only develop low-resource speech systems in Tamasheq, but also to investigate the leveraging of unannotated audio data in diverse languages that coexist in the same geographic region. This paper is organized as follows. Section 2 presents the source content of the data shared: thanks to the Fondation Hirondelle Initiative and local partners, we are able to collect broadcast news in diverse African languages. Section 3 presents the small Tamasheq-French parallel corpus, and Section 4 presents the collection of unannotated audio data in French from Niger, Fulfulde, Hausa, Tamasheq and Zarma. Finally, Section 5 presents a speech translation baseline model for the IWSLT 2022 campaign, and Section 6 concludes this work. 2. The source content: The Fondation Hirondelle Initiative The Fondation Hirondelle2 is a Swiss non-pro\ufb01t organization founded in 1995 by journalists, with the goal of supporting local independent media in areas of social unrest. They produce and broadcast information and talk shows in different countries, providing local partners with editorial, managerial and structural support and training to function in a sustainable manner. In this work we focus on their daily radio broadcasts episodes, produced and broadcast by local partners in different languages. These allow the local communities to get informed in their own dialects, in contrast to mainstream media that tends to cover only the countries\u2019 of\ufb01cial languages. For the Tamasheq lan2https://www.hirondelle.org/en/ \fguage, we \ufb01nd these episodes being produced daily in Mali (Studio Tamani3) and Niger (Studio Kalangou4). Speech Style and Quality. The radio episodes are recorded in local studios: for each episode, one or two hosts present the news, and often interviews and advertisements are included. 
Most of the speech is of good quality, with rare instances of background music during advertisements. For interviews, we notice some cases of overlapping speech, mainly when simultaneous translation is performed, and background noise such as outdoor sounds. Audio Web-crawling. With the authorization of the Fondation Hirondelle and partners, we downloaded the .mp3 episodes by generating URLs from the local partners\u2019 broadcast webpages.5 The corpora presented in Section 3 and Section 4 use these audio \ufb01les as source content. 3. The Tamasheq-French Parallel Corpus This corpus corresponds to 17 hours of controlled speech utterances, with manual translations to the French language. We also share a longer version of this corpus, including 2 additional hours of potentially noisy speech segments. We detail below the steps for creating this corpus, and present general statistics. 1. Data Downloading. 100 episodes were downloaded from the Studio Kalangou website in February 2019: 23 episodes from 2016, 36 episodes from 2017 and 2018, and 5 episodes from 2019. This results in 25 hours of raw speech, with an average episode duration of 15 minutes. 2. Translation Process. We commissioned ELDA (Evaluations and Language resources Distribution Agency)6 for translating the 25 hours of Tamasheq into French text. No transcriptions were commissioned. The translations were produced by at least two native Tamasheq speakers,7 with posterior text correction by pro\ufb01cient French speakers. The translators had access to 5 pages of guidelines, including segmentation guidelines for slicing the episodes into utterances. Annotation used the Transcriber open-source tool.8 Lastly, some utterances contain gender annotation and speaker identi\ufb01cation. Unfortunately, this annotation was not standardized across the different translators, and therefore some speakers are referred by different speaker ids in different \ufb01les, with dif\ufb01cult disambiguation. We thus caution to the use of this information, as the current speaker ids 3https://www.studiotamani.org/ 4https://www.studiokalangou.org/ 5http://.org/journaux/ 6http://www.elra.info/en/about/elda/ 7The number and identity of the translators was not disclosed to us. 8http://trans.sourceforge.net/ might represent an upper-bound over the real number of speakers in this dataset. 3. Translation Post-processing. From the original Transcriber annotation \ufb01les, we \ufb01ltered out segments corresponding to pauses, noise and music, and removed segments \ufb02agged by the annotators as corresponding to foreign languages, such as Arabic and French. We then applied sacremoses, the python port of the Moses toolkit (Koehn et al., 2007), for punctuation normalization and tokenization in French. During postprocessing, we noticed that some segments (roughly two hours) were \ufb02agged as being of poor source audio quality. For these, the translation was produced nevertheless, so we decided to include them in a larger less controlled version of the shared corpus. 4. Audio Post-processing. We use the resulting collection of segments described in 3. to split the episodes into utterance-level audio \ufb01les. For posterior use in standard speech processing libraries, we also convert the original .mp3 \ufb01les into .wav, 16bits, 16KHz, single channel. We then remove all utterances shorter than 1s and longer than 30s. This is the same audio preprocessing from Baevski et al. (2020). 5. Statistics. 
Table 1 presents the statistics for the two versions of the Tamasheq-French parallel corpus we share with the community. The difference between clean (17 h) and full (19 h) is that the latter includes potentially noisy segments. Both are available through GitHub: https://github.com/ mzboito/IWSLT2022 Tamasheq data. Regarding the gender distribution, we notice that almost all labeled utterances correspond to male speech. We also observe that more than a half of the utterances are unlabeled (unknown). For having a better idea of the gender distribution for this dataset, we performed gender labeling using the LIUM SpkDiarization tool (Meignier and Merlin, 2010). The results should be interpreted as an estimation, but we observed that all the unlabeled utterances seemed to belong to the male category. We thus believe that this dataset is unfortunately very gender unbalanced. 4. The Niger-Mali Audio Collection This unannotated audio collection corresponds to 671 hours of episodes in \ufb01ve languages: French from Niger, Fulfulde, Hausa, Tamasheq and Zarma. We automatically segmented this audio data, generating 641 hours of content ready for deployment in speech processing pipelines. We detail below the creation of this audio collection, and present some general statistics. 1. Data Downloading. Similarly to Section 3, we downloaded 606 episodes in Tamasheq from Studio Tamani,9 and 2,160 episodes in all the avail9For Studio Tamani news are broadcast twice a day. These correspond to matin and soir segments in the source \ufb01les, respectively morning and evening shows. \fclean (17 h) full (19 h) male female unknown total male female unknown total # utterances 2,313 10 3,506 5,829 2,643 11 3,625 6,279 duration 07:37:54 0:00:48 10:04:49 17:43:33 08:49:11 0:00:51 10:28:42 19:18:45 Table 1: Statistics for the clean (left) and full (right) Tamasheq-French parallel corpus. # episodes duration # utterances duration (male) duration (female) duration (total) French 464 116:22:09 38,332 52:15:07 58:46:00 111:01:07 Fulfulde 459 114:23:40 39,255 73:31:36 35:54:47 109:26:23 Hausa 424 105:32:48 35,684 75:05:12 25:39:40 100:44:52 Tamasheq 1,014 234:36:29 75,995 134:11:32 90:33:44 224:45:16 Zarma 405 100:42:34 34,198 57:03:37 38:55:33 95:59:10 Total 2,766 671:37:43 223,464 392:07:04 249:49:44 641:56:50 Table 2: Statistics for the Niger-Mali audio collection raw content (left) and automatically segmented version (right), produced by the use of a speech segmentation system with gender labeling. able languages from Studio Kalangou: French (464), Fulfulde (459), Hausa (424), Tamasheq (408) and Zarma (405). These episodes correspond to the content we managed to retrieve with our web-crawler.10 It explored URLs ranging from November 2019 to September 2021 for Studio Kalangou, and from January 2020 to September 2021 for Studio Tamani.11 The left portion of Table 2 presents the statistics for the downloaded .mp3 \ufb01les: 671 h of audio, being 116 h in French, 114 h in Fulfulde, 105 h in Hausa, 234 h in Tamasheq, and 100 h in Zarma. This corresponds to a total of 2,766 episodes, with an average episode duration of 15 minutes for \ufb01les from Studio Kalangou, and 13 minutes for Studio Tamani. Finally, we highlight that the choice of having more audio data in Tamasheq was deliberated, since in this paper we focus on building resources for the Tamasheq language. 2. Segmenting Episodes into Breath Turns. The episodes downloaded from the websites are used as input for the LIUM SpkDiarization tool. 
The goal of this step is (i) to produce a format compatible with current speech processing models, that cannot deal with very long speech turns, and (ii) to remove silence, music and other non speech events. The LIUM SpkDiarization performs speech diarization, separating turns of speech. This allows us to slice the episodes into smaller audio chunks (breath turns).12 It also has the advantage of producing gender annotation, which allow us to estimate the gender distribution for each language. After applying this diarization tool, we remove the \ufb01rst 12 seconds of each episode, as these often corresponded to intro jingles. The right portion of Table 2 presents the obtained result: 641 h of audio, being 111 h of French, 109 h of Fulfulde, 100 h of Hausa, 224 h of Tamasheq, and 95 h of Zarma. There are 392 h 10Accessing and downloading date: 07/10/2021. 11Since the sites vary in their \ufb01le indexing, not all episodes from the indicated periods are successfully retrieved. 12By default, the maximum turn length is set to 20 milliseconds. estimated to be from male speakers, and 249 h from female speakers. 3. Resulting Corpus. We make both versions of this corpus (Table 2) available to the community: the 671 h corpus based on episodes, and the 641 h version based on breath turns. This is because, even though we believe our segmentation process to be of good quality, it is still supported by an automatic diarization tool. By providing the source content, we allow the community to choose their own segmentation approach. The audio collection is made available through a dedicated website: https://demo-lia.univ-avignon.fr/ studios-tamani-kalangou/. In the next section, we brie\ufb02y elaborate on the \ufb01ve languages available. 4.1. The Languages The speech resources we collect and share in this paper correspond to \ufb01ve languages spoken in Niger: French, Fulfulde, Hausa, Tamasheq and Zarma. We now provide a brief description of these languages. \u2022 French (FRA): French is the of\ufb01cial language of the Niger, and a high-resource romance language from the indo-european family. At \ufb01rst, we intended to include only the other four languages listed in this section in the audio collection. However, we noticed some french segments in the Tamasheq annotation from Section 3, and hypothesized that some lexical borrowing might happen due to the coexistence of these languages in the same region.13 \u2022 Fulfulde (FUV): Fulfulde, also known as Fula, Peul or Fulani, is a Senegambian branch of the Niger-Congo language family. Unlike most Niger-Congo languages, it does not have tones (Williamson, 1989). The number 13The same could also be true for the Arabic language, as annotators identi\ufb01ed some instances of Arabic terms in the Tamasheq speech from Section 3. \fof speakers is estimated to be above 40 million (Hammarstr\u00a8 om, 2015). The native speakers of this language, the Fula people, are one of the largest ethnic groups in the Sahel and West Africa (Hughes, 2009).14 \u2022 Hausa (HAU): Hausa is a Chadic language, member of the Afro-Asiatic language family. It is spoken mainly within the northern half of Nigeria and the southern half of Niger, with Wolff et al. (1991) and Newman (2009) estimating the number of speakers between 20 and 50 million. Early studies in Hausa showcased a remarkable number of loanwords from Arabic, Kanuri, and Tamasheq (Sch\u00a8 on, 1862). 
\u2022 Tamasheq (TAQ): Tamasheq is a variety of Tuareg, a Berber macro-language spoken by nomadic tribes across North Africa in Algeria, Mali, Niger and Burkina Faso (Heath, 2006). It accounts for approximately 500,000 native speakers, being mostly spoken in Mali and Niger (Hammarstr\u00a8 om, 2015). The livelihood of the Tuareg people has been under threat in the last century, due to climate change and a series of political con\ufb02icts (Decalo, 1997). This reduced considerably the number of speakers of Tamasheq however, partially due to the Malian government\u2019s active promotion of the language in recent years, Tamasheq is now classi\ufb01ed as a developing language (Hammarstr\u00a8 om, 2015). \u2022 Zarma (DJE): Zarma, also spelled Djerma, is a leading indigenous Songhay language of the southwest lobe of the west African nation of Niger, spoken by over 2 million speakers. This tonal language is also spoken in Nigeria, Burkina Faso, Mali, Sudan, Benin and Ghana (Britannica, The Editors of Encyclopedia, 2015).15 5. Use case: Speech Translation Baseline In this paper we present speech resources for the Tamasheq language, and in four other geographically close languages. They are shared in the context of the IWSLT 2022 low-resource speech translation track. In this section we present as use case our end-to-end speech translation baseline that uses the TamasheqFrench Parallel Corpus from Section 3. Dataset. We run this baseline experiment using both versions of the dataset from Section 3, with data splits detailed in Table 3. We extract 80-dimensional mel \ufb01lterbank features from the Tamasheq utterances. For the French text, we build a 1k unigram vocabulary using Sentencepiece (Kudo and Richardson, 2018) without pre-tokenization. 14Lexical resources can be found at: http://www. language-archives.org/language/ful 15Lexical resources can be found at: http://www. language-archives.org/language/dje train valid test clean (17 h) 4,444 / 13h50 581 / 1h53 804 / 1h59 full (19 h) 4,886 / 15h24 Table 3: The (Number of utterances / duration) per set. Both clean and full share the same validation and test sets. valid test clean (17 h) 2.22 (20.6/3.6/1.1/0.4) 1.80 (18.8/2.9/0.8/0.3) full (19 h) 2.31 (18.5/3.3/1.0/0.4) 1.90 (15.9/2.6/0.9/0.4) Table 4: End-to-end speech translation BLEU4 results for the baselines, with detailed scores between parentheses. Architecture. We use the fairseq s2t toolkit (Wang et al., 2020), training end-to-end speech translation Transformer models (Vaswani et al., 2017), preceded by two convolutional layers for dimensionality reduction.16 These models are trained for 500 epochs using the Adam optimizer (Kingma and Ba, 2014) with 10k warm-up steps. For decoding, we use beam search with a beam size of 5, and we evaluate the models using the best checkpoint with respect to the loss in the validation set. Results and Discussion. Table 4 presents detokenized case-sensitive BLEU scores computed using sacreBLEU (Post, 2018). Looking at these results, we notice that the full version of the dataset improves slightly over the clean version. The former contains roughly two extra hours in its training set, and thus this could hint that having more data in data scarcity scenarios is bene\ufb01cial, even when this data is of questionable quality. Nevertheless, the performance of both baselines is very low. They highlight the challenge of lowresource end-to-end speech translation when the only data used is of parallel nature. 
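To illustrate the input preparation just described, here is a rough sketch of extracting 80-dimensional filterbank features and training the 1k unigram SentencePiece vocabulary on the raw (untokenized) French text. The file paths are placeholders and this is not the exact fairseq s2t preprocessing pipeline used for the baseline.

```python
import sentencepiece as spm
import torchaudio

# 1k unigram vocabulary over the French translations, without pre-tokenization
spm.SentencePieceTrainer.train(
    input="train.fr.txt",            # one translation per line (placeholder path)
    model_prefix="spm_unigram_fr",
    vocab_size=1000,
    model_type="unigram",
)

# 80-dimensional log-mel filterbank features for one Tamasheq utterance
waveform, sample_rate = torchaudio.load("utterance.wav")   # 16 kHz mono wav (placeholder)
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)                                                          # (num_frames, 80)
```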
We believe results can be further improved by using auxiliary monolingual tools and models. The next paragraphs elaborate on this. For the text, and since French is a high-resource language, one could incorporate pre-trained embeddings to the translation decoder. For the decoding procedure, language models \u2013 such as CAMEMBERT (Martin et al., 2020) and FLAUBERT (Le et al., 2020) \u2013 can be used. Pretrained decoders like MBART (Liu et al., 2020) could also be incorporated. For the speech, the self-supervised speech representation produced by models such as HUBERT (Hsu et al., 2021) and WAV2VEC 2.0 (Baevski et al., 2020) can replace mel \ufb01lterbanks features for the speech translation encoder. One can use freely available pretrained models in high-resource languages, or train these models from 16Settings are detailed at their s2t transformer xs recipe. \fscratch. For the latter option, the resources from Section 4 can be used. In both cases, self-supervised (also called task-agnostic) \ufb01ne-tuning in the target language can increase results, but the best option seems to be to \ufb01ne-tune in the target task directly (Evain et al., 2021; Babu et al., 2021). Lastly, an interesting research direction is the leveraging of multilingual data in self-supervised models for speech. There are massive multilingual models that produce speech representations from many unrelated languages seen during training (Conneau et al., 2020; Babu et al., 2021). However, we currently do not know if these models are in fact better than having dedicated models trained with a smaller set of languages that are closely related (i.e. in speech style, geography, phonology, linguistic family). Thus, it might be interesting to compare the speech representations produced by a multilingual model based on the languages from Section 4, against current multilingual baselines, such as XLSR-53 (Conneau et al., 2020) and XLSR (Babu et al., 2021). 6." + }, + { + "url": "http://arxiv.org/abs/2106.04298v2", + "title": "Unsupervised Word Segmentation from Discrete Speech Units in Low-Resource Settings", + "abstract": "Documenting languages helps to prevent the extinction of endangered dialects,\nmany of which are otherwise expected to disappear by the end of the century.\nWhen documenting oral languages, unsupervised word segmentation (UWS) from\nspeech is a useful, yet challenging, task. It consists in producing time-stamps\nfor slicing utterances into smaller segments corresponding to words, being\nperformed from phonetic transcriptions, or in the absence of these, from the\noutput of unsupervised speech discretization models. These discretization\nmodels are trained using raw speech only, producing discrete speech units that\ncan be applied for downstream (text-based) tasks. In this paper we compare five\nof these models: three Bayesian and two neural approaches, with regards to the\nexploitability of the produced units for UWS. For the UWS task, we experiment\nwith two models, using as our target language the Mboshi (Bantu C25), an\nunwritten language from Congo-Brazzaville. Additionally, we report results for\nFinnish, Hungarian, Romanian and Russian in equally low-resource settings,\nusing only 4 hours of speech. Our results suggest that neural models for speech\ndiscretization are difficult to exploit in our setting, and that it might be\nnecessary to adapt them to limit sequence length. 
We obtain our best UWS\nresults by using Bayesian models that produce high quality, yet compressed,\ndiscrete representations of the input speech signal.", + "authors": "Marcely Zanon Boito, Bolaji Yusuf, Lucas Ondel, Aline Villavicencio, Laurent Besacier", + "published": "2021-06-08", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction Popular models for speech processing still rely on the availability of considerable amounts of speech data and their transcriptions, which reduces model applicability to a limited subset of languages considered highresource. This excludes a considerable number of lowresource languages, including many from oral tradition. Besides, learning supervised representations from speech differs from the unsupervised way infants learn language, hinting that it should be possible to develop more data-ef\ufb01cient speech processing models. Recent efforts for zero-resource processing (Glass, 2012; Jansen et al., 2013; Versteegh et al., 2016; Dunbar et al., 2017; Dunbar et al., 2019; Dunbar et al., 2020) focus on building speech systems using limited amounts of data (hence zero resource), and without textual or linguistic resources, for increasingly challenging tasks such as acoustic or lexical unit discovery. Such zero resource approaches also stimulated interest for computational language documentation (Besacier et al., 2006; Duong et al., 2016; Godard et al., 2018; Bird, 2021) and computational language acquisition (Dupoux, 2018). In this paper we address the challenging task of unsupervised word segmentation (UWS) from speech. This task consists of outputting time-stamps delimiting stretches of speech, associated with class labels corresponding to word hypotheses, without access to any supervision. We build on the work presented in Godard et al. (2018): they proposed a cascaded model for UWS that \ufb01rst generates a discrete sequence from the speech signal using the model from Ondel et al. (2016), and then segments the discrete sequence into words using a Bayesian (Goldwater, 2007) or a neural (Boito et al., 2017) approach. Since then, much progress has been made in automatic speech discretization: ef\ufb01cient Bayesian models for acoustic unit discovery (AUD) emerged (Ondel et al., 2019; Yusuf et al., 2021), and self-supervised models based on neural networks \u2013 typically made of an auto-encoder structure with a discretization layer \u2013 were also introduced (van den Oord et al., 2017; Baevski et al., 2020a; Chorowski et al., 2019). Therefore, in this work we revise and extend Godard et al. (2018) by empirically investigating the exploitability of \ufb01ve recent approaches for speech discretization for the UWS task in a rather low-resource scenario, using approximately 4 hours of speech (roughly 5k sentences). More precisely, we train three Bayesian speech discretization models (HMM (Ondel et al., 2016), SHMM (Ondel et al., 2019) and H-SHMM (Yusuf et al., 2021)), and two neural models (VQ-VAE (van den Oord et al., 2017) and vq-wav2vec (Baevski et al., 2020a)). We extract discrete speech units from them using only 4 hours of speech, and we perform UWS from the sequences produced. Our pipeline targets the Mboshi language (Bantu C25), an unwritten language arXiv:2106.04298v2 [cs.CL] 18 May 2022 \ffrom Congo-Brazzaville. Additionally, we perform experiments in equal data settings for Finnish, Hungarian, Romanian and Russian. 
This allows us to assess the language-related impact in our UWS pipeline. Our experiments show that neural models for speech discretization are dif\ufb01cult to exploit for UWS, as they output very long sequences. In contrast to that, the Bayesian speech discretization approaches from Ondel et al. (2019) and Yusuf et al. (2021) are robust and generalizable, producing high quality, yet compressed, discrete speech sequences from the input utterances in all languages. We obtain our best results by using these sequences for training the neural UWS model from Boito et al. (2017). This paper is organized as follows. Section 2 presents related work, and Section 3 details the speech discretization models we experiment with. Section 4 presents our experimental setup, and Section 5 our experiments. Section 6 concludes our work. 2. Related Work The work presented here revises the UWS model from speech in low-resource settings presented in Godard et al. (2018). Boito et al. (2019) complemented that work by tackling different neural models for bilingual UWS, but they did not address the discretization portion of the pipeline, working directly from manual phonetic transcriptions. In Kamper and van Niekerk (2021), the authors propose constraining the VQ-VAE model in order to generate a more exploitable output representation for direct application to the UWS task in English. Different from that, in this work we focus on providing an empirical comparison of recent discretization approaches, extending Godard et al. (2018) and providing results in low-resource settings, and in \ufb01ve different languages. This work falls into the category of computational language documentation approaches. Recent works in this \ufb01eld include the use of aligned translation for improving transcription quality (Anastasopoulos and Chiang, 2018), and for obtaining bilingually grounded UWS (Duong et al., 2016; Boito et al., 2017). We \ufb01nd pipelines for obtaining manual (Foley et al., 2018) and automatic (Michaud et al., 2018) transcriptions, and for aligning transcription and audio (Strunk et al., 2014). Other examples are methods for low-resource segmentation (Lignos and Yang, 2010; Goldwater et al., 2009), and for lexical unit discovery without textual resources (Bartels et al., 2016). Finally, direct speechto-speech (Tjandra et al., 2019) and speech-to-text (Besacier et al., 2006; B\u00b4 erard et al., 2016) architectures could be an option for the lack of transcription, but it remains to be seen how exploitable these architectures can be in low-resource settings. Lastly, we highlight that recent models based on selfsupervised learning (Schneider et al., 2019; Baevski et al., 2019; Wang et al., 2020; Liu et al., 2020; Baevski et al., 2020b; Hsu et al., 2021) provide an interesting novel option for reducing the amount of labeled data needed in downstream tasks such as automatic speech recognition and speech translation. In this work we experiment with the vq-wav2vec model, a predecessor of the popular wav2vec 2.0 (Baevski et al., 2020b). We however, do not extend our investigation to the latter, or to models such as HuBERT (Hsu et al., 2021). 
This is because, while these models do produce a certain discretization of the speech (for wav2vec 2.0 via its quantization module, for HuBERT via clustering of MFCC features), we judge this discretization to be insufficiently exploitable for downstream text-based approaches due to its excessive length.1 We do, however, find promising the integration of self-supervised speech features into Bayesian AUD models, as in Ondel et al. (2022).

Footnote 1: For instance, wav2vec 2.0 trains on a joint diversity loss for inciting the use of its discrete units. Its large codebook of G = 8, V = 8 results in an upper bound of 8^8 units.

3. Unsupervised Speech Discretization Models

Speech discretization consists in labeling the speech signal into discrete speech units, which can correspond or not to the language phonetic inventory. This problem can be formulated as the learning of a set of U discrete units with embeddings H = {η_1, . . . , η_U} from a sequence of untranscribed acoustic features X = [x_1, . . . , x_N], as well as the assignment of frames to units z = [z_1, . . . , z_N]. Depending on the approach, neural (Section 3.1) or Bayesian (Section 3.2), the assumptions and the inference regarding these three quantities will differ.

3.1. Neural (VQ-based) models

VQ-VAE. It comprises an encoder, a decoder, and a set of unit-specific embeddings H. The encoder is a neural network that transforms the data into a continuous latent representation V = (v_1, . . . , v_N). Each frame is then assigned to the closest embedding in the Euclidean sense (Equation 1). The decoder transforms the sequence of quantized vectors into parameters of the conditional log-likelihood of the data p(x_n | z), and the network is trained to maximize this likelihood. Since the quantization step is not differentiable, the encoder is trained with a straight-through estimator (Bengio et al., 2013). In addition, a pair of ℓ2 losses is used to minimize the quantization error, and the overall objective function that is maximized is presented in Equation 2, where sg[·] is the stop-gradient operator. We define the likelihood p(x_n | z_n) = N(x_n; µ(η_{z_n}), I); under this assumption, the log-likelihood reduces to the mean-squared error ||x_n − µ(η_{z_n})||_2^2.

z_n = \arg\min_u \| v_n - \eta_u \|_2 \qquad (1)

\mathcal{L} = \frac{1}{N} \sum_{n=1}^{N} \Big( \ln p(x_n \mid z_n) - k_1 \| \mathrm{sg}[\eta_{z_n}] - v_n \|_2^2 - k_2 \| \eta_{z_n} - \mathrm{sg}[v_n] \|_2^2 \Big) \qquad (2)
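As a concrete reading of Equations 1-2, the following minimal PyTorch sketch (our own illustration, not the implementation cited in the training details below) performs the nearest-embedding assignment, applies the straight-through estimator, and computes the two stop-gradient ℓ2 terms as losses to minimize; the defaults mirror the setting used in this work (50 units, k1 = 2, k2 = 4).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_units=50, dim=128, k1=2.0, k2=4.0):
        super().__init__()
        self.embeddings = nn.Embedding(num_units, dim)  # the unit embeddings eta_u
        self.k1, self.k2 = k1, k2

    def forward(self, v):                               # v: (batch, frames, dim) encoder outputs
        flat = v.reshape(-1, v.size(-1))
        dists = torch.cdist(flat, self.embeddings.weight)         # distances to all unit embeddings
        z = dists.argmin(dim=-1).view(v.shape[:-1])               # Equation 1: closest unit per frame
        quantized = self.embeddings(z)                            # eta_{z_n}

        # Equation 2 penalties, written here as losses to minimize
        commitment = self.k1 * F.mse_loss(v, quantized.detach())  # ||sg[eta] - v||^2 term
        codebook = self.k2 * F.mse_loss(quantized, v.detach())    # ||eta - sg[v]||^2 term

        # Straight-through estimator: gradients reach the encoder through v
        quantized = v + (quantized - v).detach()
        return quantized, z, commitment + codebook
```

The reconstruction term ln p(x_n | z_n) would be added by the decoder as a mean-squared error, as stated above.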
vq-wav2vec. This model is composed of an encoder (f : X → Z), a quantizer (q : Z → Ẑ) and an aggregator (g : Ẑ → C). The encoder is a CNN which maps the raw speech input X into the dense feature representation Z. From this representation, the quantizer produces discrete labels Ẑ from a fixed-size codebook e ∈ R^{V×d} with V representations of size d. Since replacing an encoder feature vector z_i by a single codebook entry makes the method prone to mode collapse, the authors independently quantize partitions of each feature vector by creating multiple groups G, arranging the feature vector into a matrix z′ ∈ R^{G×(d/G)}. Considering each row as an integer index, the full feature vector is represented by the indices i ∈ [V]^G, with V being the number of possible values for a given group, and each element i_j corresponding to a fixed codebook vector (j ∈ |G|). For each of the G groups, the quantization is performed using Gumbel-Softmax (Jang et al., 2017) or online k-means clustering. Finally, the aggregator combines multiple quantized feature vector time steps into a new representation c_i for each time step i. The model is trained to distinguish a sample k steps in the future, ẑ_{i+k}, from distractor samples z̃ drawn from a distribution p_n. This is done by minimizing the contrastive loss for steps k = {1, . . . , K} as in Equation 3, where T is the sequence length, σ(x) = 1/(1 + exp(−x)), σ(ẑ⊤_{i+k} h_k(c_i)) is the probability of ẑ_{i+k} being the true sample, and h_k(c_i) = W_k c_i + b_k is a step-specific affine transformation. This loss is accumulated over all k steps: L = Σ_{k=1}^{K} L_k.

L_k = \sum_{i=1}^{T-k} \Big( \log \sigma\big(\hat{z}_{i+k}^{\top} h_k(c_i)\big) + \lambda\, \mathbb{E}_{\tilde{z} \sim p_n}\big[ \log \sigma\big(-\tilde{z}^{\top} h_k(c_i)\big) \big] \Big) \qquad (3)

Training. For VQ-VAE, the encoder has 4 Bi-LSTM layers, each with output dimension 128, followed by a 16-dimensional feed-forward decoder with one hidden layer. The number of discovered units (quantization centroids) is set to 50. This setting is unusually low, but it helps to reduce the length of the output sequence. We set k1 = 2 and k2 = 4 (Equation 2), and train with Adam (Kingma and Ba, 2015), with an initial learning rate of 2 × 10^-3 which is halved whenever the loss stagnates for two training epochs.2 For vq-wav2vec, we use the small model from Baevski et al. (2020a),3 but with only 64 channels, a residual scale of 0.2, and a warm-up of 10k. For the vocabulary we set G = 2 and experimented with both V = 4, resulting in 16 units (VQ-W2V-V16), and V = 6, resulting in 36 units (VQ-W2V-V36). Larger vocabularies resulted in excessively long sequences which could not be used for UWS.4 We also experimented with reducing the representation by using byte pair encoding (BPE) (Sennrich et al., 2016), hypothesizing that phones were being modeled by a combination of different units. In this setting, BPE serves as a method for identifying and clustering these patterns. Surprisingly, we found that using BPE resulted in a decrease in UWS performance. This hints that this model might not be very consistent in its labeling process.

Footnote 2: Implementation available at: https://github.com/BUTSpeechFIT/vq-aud
Footnote 3: Implementation available at: https://github.com/pytorch/fairseq/tree/master/examples/wav2vec

3.2. Bayesian Generative Models

For generative models, each acoustic unit embedding η_i represents the parameters of a probability distribution p(x_n | η_{z_n}, z_n) with latent variables z. Discovering the units amounts to estimating the posterior distribution over the embeddings H and the assignment variables z, given by:

p(z, H \mid X) \propto p(X \mid z, H)\, p(z \mid H) \prod_{u=1}^{U} p(\eta_u) \qquad (4)

From this, we describe three different approaches.

HMM. In this model each unit is a 3-state left-to-right HMM with parameters η_i. Altogether, the set of units forms a large HMM analogous to a "phone-loop" recognition model. This model, described in Ondel et al. (2016), serves as the backbone for the two subsequent models.

SHMM. The prior p(η) in Equation 4 is the probability that a sound, represented by an HMM with parameters η, is an acoustic unit. For the former model, it is defined as a combination of exponential family distributions forming a prior conjugate to the likelihood. While mathematically convenient, this prior does not incorporate any knowledge about phones, i.e. it considers all possible sounds as potential acoustic units.
In Ondel et al. (2019), they propose to remedy this shortcoming by defining the parameters of each unit u as in Equation 5, where e_u is a low-dimensional unit embedding, W and b are the parameters of the phonetic subspace, and the function f(·) ensures that the resulting vector η_u dwells in the HMM parameter space. The subspace, defined by W and b, is estimated from several labeled source languages. The prior p(η) is defined over the low-dimensional embeddings p(e) rather than over η directly, therefore constraining the search for units to the relevant region of the parameter space. This model is denoted the Subspace HMM (SHMM).

\eta_u = f(W \cdot e_u + b) \qquad (5)

Footnote 4: For instance, the dpseg original implementation only processes sequences shorter than 350 tokens.

H-SHMM. While the SHMM significantly improves results over the HMM, it also suffers from an unrealistic assumption: it assumes that the phonetic subspace is the same for all languages. Yusuf et al. (2021) relax this assumption by proposing to adapt the subspace to each target language while learning the acoustic units. Formally, for a given language λ, the subspace and the acoustic units' parameters are constructed as in Equations 6-8, where the matrices M_0, . . . , M_K and vectors m_0, . . . , m_K represent a "template" phonetic subspace linearly combined by a language embedding α^λ = [α^λ_1, α^λ_2, . . . , α^λ_K]^⊤. The matrices M_i and the vectors m_i are estimated from labeled languages, for instance from a multilingual transcribed speech dataset. The acoustic units' low-dimensional embeddings {e_i} and the language embedding α are learned on the target (unlabeled) speech data. We refer to this model as the Hierarchical SHMM (H-SHMM).

W_\lambda = M_0 + \sum_{k=1}^{K} \alpha_k^\lambda M_k \qquad (6)

b_\lambda = m_0 + \sum_{k=1}^{K} \alpha_k^\lambda m_k \qquad (7)

\eta_{\lambda,u} = f(W_\lambda \cdot e_{\lambda,u} + b_\lambda) \qquad (8)

Inference. For the three generative models, the posterior distribution is intractable and cannot be estimated. Instead, one seeks an approximate posterior q({η_i}, z) = q({η_i}) q(z) that maximizes the variational lower bound L[q]. Concerning the estimation of q(z), the expectation step is identical for all models and is achieved with a modified forward-backward algorithm described in Ondel et al. (2016). Estimation of q(η), the maximization step, is model-specific and is described in Ondel et al. (2016) for the HMM, in Ondel et al. (2019) for the SHMM, and in Yusuf et al. (2021) for the H-SHMM model. Finally, the output of each system is obtained from a modified Viterbi algorithm that uses the expectation of the log-likelihoods with respect to q({η_i}), instead of point estimates.

Training. The models are trained with 4 Gaussians per HMM state and using 100 for the Dirichlet process' truncation parameter. SHMM and H-SHMM use an embedding size of 100, and H-SHMM models have a 6-dimensional language embedding. For the methods that use subspace estimation (SHMM and H-SHMM), this estimation uses the following languages: French, German, Spanish and Polish from the Globalphone corpus (Schultz et al., 2013), as well as Amharic (Abate et al., 2005), Swahili (Gelas et al., 2012) and Wolof (Gauthier et al., 2016) from the ALFFA project (Besacier et al., 2015). We use 2-3 hour subsets of each, for a total of roughly 19 hours.
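To recap the subspace construction in Equations 5-8, here is a toy numpy illustration; it is entirely ours, with made-up dimensions and random templates standing in for the quantities estimated from the labeled source languages, and tanh as a stand-in for the mapping f(·) to valid HMM parameters.

```python
import numpy as np

K, D, P = 6, 100, 40                       # language emb. size, unit emb. size, param dim (placeholders)
M = np.random.randn(K + 1, P, D)           # template matrices M_0 ... M_K
m = np.random.randn(K + 1, P)              # template vectors  m_0 ... m_K
alpha = np.random.randn(K)                 # language embedding, learned on target data

W_lang = M[0] + np.tensordot(alpha, M[1:], axes=1)   # Equation 6
b_lang = m[0] + alpha @ m[1:]                        # Equation 7

def f(x):                                            # stand-in for the map to HMM parameter space
    return np.tanh(x)

e_u = np.random.randn(D)                             # low-dimensional embedding of one unit
eta_u = f(W_lang @ e_u + b_lang)                     # Equation 8
```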
4. Experimental Setup

From the discrete speech units produced by the presented speech discretization models, we produce segmentation in the symbolic domain by using two UWS models. A final speech segmentation is then inferred using the units' time-stamps and evaluated by using the Zero-Resource Challenge 2017 evaluation suite, track 2 (Dunbar et al., 2017).5 We now detail the UWS models used in this work, which are trained with the same parameters from Godard et al. (2018). We also detail the datasets and the post-processing for the discrete speech units.

Table 1: Statistics for the datasets, computed over the text (FR) or over the phonetic representation (*).
| Corpus | Language | #Types | #Tokens | Avg Token Length | Avg #Tokens per Sentence |
| MB-FR | MB* | 6,633 | 30,556 | 4.2 | 6.0 |
| MB-FR | FR | 5,162 | 42,715 | 4.4 | 8.3 |
| MaSS | FI* | 12,088 | 70,226 | 6.0 | 13.2 |
| MaSS | HU* | 12,993 | 69,755 | 5.9 | 13.1 |
| MaSS | RO* | 6,795 | 84,613 | 4.5 | 15.9 |
| MaSS | RU* | 10,624 | 67,176 | 6.2 | 12.6 |
| MaSS | FR | 7,226 | 94,527 | 4.1 | 17.8 |

Table 2: Statistics for the discrete speech units produced for Mboshi, with the difference between the produced and reference representation in parentheses. RAW is the original output from the speech discretization models; +SIL is the result after silence post-processing.
| | | HMM | SHMM | H-SHMM |
| RAW | # Units | 77 (+9) | 76 (+8) | 49 (-19) |
| RAW | Avg #Units per sequence | 27.5 (+8.7) | 24.0 (+5.2) | 21.7 (+2.9) |
| RAW | Max Length | 68 (+17) | 69 (+18) | 63 (+12) |
| +SIL | # Units | 75 (+7) | 75 (+7) | 47 (-21) |
| +SIL | Avg #Units per sequence | 20.9 (+2.1) | 19.9 (+1.1) | 19.4 (+0.6) |
| +SIL | Max Length | 69 (+18) | 62 (+11) | 60 (+9) |
| | | VQ-VAE | VQ-W2V-V16 | VQ-W2V-V36 |
| RAW | # Units | 50 (-18) | 16 (-52) | 36 (-32) |
| RAW | Avg #Units per sequence | 65.2 (+46.4) | 81.7 (+62.9) | 111.0 (+92.2) |
| RAW | Max Length | 217 (+166) | 289 (+238) | 361 (+310) |
| +SIL | # Units | 50 (-18) | 16 (-52) | 36 (-32) |
| +SIL | Avg #Units per sequence | 43.4 (+24.6) | 52.6 (+33.8) | 76.2 (+57.4) |
| +SIL | Max Length | 143 (+92) | 229 (+178) | 271 (+220) |

Bayesian UWS approach (monolingual). Non-parametric Bayesian models (Goldwater, 2007; Johnson and Goldwater, 2009) are statistical approaches for UWS and morphological analysis, known to be robust in low-resource settings (Godard et al., 2016). In these models, words are generated by a unigram or bigram model over an infinite inventory, through the use of a Dirichlet process. In this work, we use the unigram model from dpseg (Goldwater et al., 2009),6 which was shown to be superior to the bigram model in low-resource settings (Godard, 2019).

Neural UWS approach (bilingual). We follow the bilingual pipeline from Godard et al. (2018). The discrete speech units and their sentence-level translations are fed to an attention-based neural machine translation system that produces soft-alignment probability matrices between source and target sequences. For each sentence pair, its matrix is used for clustering together (segmenting) neighboring phones whose alignment distribution peaks at the same source word. Examples of these matrices are provided in Figure 1. We refer to this model as neural.

Footnote 5: Resources are available at http://zerospeech.com/2017
Footnote 6: Implementation available at http://homepages.inf.ed.ac.uk/sgwater/resources.html

Figure 1: Heatmaps for the soft-alignment probability matrices generated by the neural UWS models (bilingual) trained on different discrete speech units, for the same French-Mboshi sentence. The darker the square, the higher the pair probability. The rows present the automatically generated units from the different discretization models, informed in the bottom.

Datasets.
We use the Mboshi-French parallel corpus (MB-FR) (Godard et al., 2018), which is a 5,130-sentence corpus from the language documentation process of Mboshi (Bantu C25), an oral language spoken in Congo-Brazzaville. We also report results using an extract from the MaSS corpus (Boito et al., 2020), a multilingual speech-to-speech and speech-to-text dataset. We use the down-sampling from Boito et al. (2020), which results in 5,324 aligned sentences. We exclude French and Spanish, as these languages are present in the subspace prior of the SHMM and H-SHMM models, and we exclude English as it was used to tune the hyperparameters of the subspace models and the VQ-VAE. We also exclude Basque, as the sequences produced were too long for UWS training. The final set of languages is: Finnish (FI), Hungarian (HU), Romanian (RO) and Russian (RU). In all cases, the French (FR) translations are used as supervision for the neural UWS approach. Statistics are presented in Table 1.

Discrete Speech Units Post-processing. We experiment with reducing the representation by removing units predicted in silence windows. For this, we use the gold references' silence annotations. Removing these allows us to focus the investigation on the quality of the units generated in relevant portions of the speech. We see in Table 2 that removing windows that we know correspond to silence considerably reduces the number of units generated by all models. Before UWS evaluation, the silence windows are reintroduced to ensure that their segmentation boundaries are taken into account. This approach is justified because a silence detector is an inexpensive resource to obtain: popular software such as Praat (Boersma, 2006) is able to handle this task in any language. Figure 2 exemplifies the discrete speech units discovered by the models before applying this post-processing.

5. Experiments

We first present our results for the MB-FR dataset, the language which corresponds to the true low-resource scenario that we are interested in. Table 3 presents UWS Boundary F-scores for UWS models (dpseg and neural) trained using different discrete speech units for the MB-FR dataset. We include results for both the direct output (RAW) and the post-processed version (+SIL). The RAW VQ-W2V-V36 is not included, as its output sequences were excessively large for training our UWS models (Table 2).

Table 3: UWS Boundary F-scores for the MB-FR dataset.
| Units | dpseg (RAW) | dpseg (+SIL) | neural (RAW) | neural (+SIL) |
| 1 HMM | 32.4 | 59.9 | 35.1 | 61.2 |
| 2 SHMM | 43.7 | 61.4 | 41.4 | 64.7 |
| 3 H-SHMM | 45.3 | 61.4 | 44.8 | 63.9 |
| 4 VQ-VAE | 39.0 | 52.7 | 32.1 | 60.1 |
| 5 VQ-W2V-V16 | 37.4 | 52.2 | 32.0 | 50.6 |
| 6 VQ-W2V-V36 | - | 48.0 | - | 49.8 |
| 7 True Phones | 77.1 | - | 74.5 | - |

We observe that in all cases, post-processing the discrete speech units with the silence information (+SIL) creates easier representations for the UWS task. We believe this is due to the considerable reduction in the average length of the sequences (Table 2). For Bayesian models, we also observe a reduction in the number of units, meaning that some units were modelling silence windows, even though these models already produce an independent token for silence, which we remove before UWS training. Looking at the results for UWS models trained using the output of VQ-based models (rows 4-6), we see that the best segmentation result is achieved using the one with the smallest average sequence length (VQ-VAE). In general, we believe that all VQ-based models underperform due to the excessively long sequences produced, which are challenging for UWS.
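The Boundary F-score reported in Table 3 above (and in Table 4 below) can be illustrated with the toy function that follows; the official scoring uses the Zero-Resource Challenge 2017 evaluation suite, so this simplified per-utterance version is only meant to show what is being measured.

```python
# Toy illustration of a boundary F-score, not the official ZRC 2017 scorer.
def boundary_fscore(hyp_boundaries, ref_boundaries):
    hyp, ref = set(hyp_boundaries), set(ref_boundaries)
    hits = len(hyp & ref)
    precision = hits / len(hyp) if hyp else 0.0
    recall = hits / len(ref) if ref else 0.0
    return 2 * precision * recall / (precision + recall) if hits else 0.0

# Boundaries given as positions in the discrete unit sequence
print(round(boundary_fscore([2, 5, 9], [2, 4, 9]), 2))  # 0.67
```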
Figure 2 illustrates the difference in representation length discussed above, presenting the discrete speech units produced by the Bayesian and neural models for a given utterance: the latter produce considerably more units.

Figure 2: Speech discrete units produced by the five models for the same Mboshi sentence: (a) HMM, (b) SHMM, (c) H-SHMM, (d) VQ-VAE, (e) VQ-W2V-V16, (f) VQ-W2V-V36. Black lines denote the true boundaries, while dashed white lines denote the discovered unit boundaries. For each example: discrete speech units (top) and reference (bottom).

Overall, we find that UWS models trained using the discrete speech units from Bayesian models produce better segmentation, with models trained with SHMM and H-SHMM presenting the best results. In Yusuf et al. (2021) both systems showed competitive results for the AUD task. A noticeable difference between these two models is the compression level: H-SHMM uses 27 fewer units than SHMM. Regarding type retrieval, the models scored 12.1% (SHMM), 10.7% (H-SHMM), and 31% (topline). We also find that SHMM models produced more types and fewer tokens, reaching a higher Type-Token Ratio (0.63) compared to H-SHMM (0.55).

Focusing on the generalization of the presented speech discretization models, we trained our models using four languages from the MaSS dataset. We observed that, due to the considerably larger average length of the sentences (Table 1), the VQ-based models produced sequences which we were unable to directly apply to UWS training. This again highlights that these models need some constraining, or post-processing, in order to be directly exploitable for UWS. Focusing on the Bayesian models, which performed the best for generating exploitable discrete speech units for UWS in low-resource settings, Table 4 presents UWS results. We omit results for RAW, as we observe the same trend as in Table 3. Looking at the results for the four languages, we again observe competitive results for SHMM and H-SHMM models, illustrating that these approaches generalize well to different languages.

Table 4: UWS Boundary F-scores for the MaSS dataset using Bayesian models (+SIL only). Best UWS results from speech discrete units (bold) and from true phones (underlined) are highlighted.
| Units | dpseg: FI | HU | RO | RU | neural: FI | HU | RO | RU |
| HMM | 45.6 | 49.9 | 53.5 | 47.1 | 53.4 | 51.2 | 56.6 | 54.9 |
| SHMM | 49.0 | 52.3 | 53.5 | 50.5 | 56.0 | 53.9 | 57.7 | 57.7 |
| H-SHMM | 50.5 | 52.9 | 58.0 | 52.9 | 56.1 | 53.3 | 59.6 | 56.0 |
| True Phones | 87.1 | 83.3 | 88.0 | 85.9 | 68.4 | 63.4 | 75.7 | 68.4 |

Comparing the UWS results presented in Table 3 (Mboshi) and Table 4 (languages from MaSS), we notice overall lower results for the languages from the MaSS dataset (best result: 59.6) compared to Mboshi (best result: 64.7). We believe this is due to the MaSS data coming from read text, in which the utterances correspond to verses that are consistently longer than sentences (Table 1). This results in a more challenging setting for UWS and explains the lower results. Lastly, our results over five languages show that the neural UWS model produces better segmentation results from discrete speech units than dpseg, which in turn performs best with the true phones (topline). This confirms the trend observed by Godard et al. (2018). The neural UWS models have the advantage of their word-level aligned translations for grounding the segmentation process, which might be attenuating the difficulty of the task in this noisier scenario, with longer sequences and more units.
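Since the +SIL variant appears throughout Tables 2-4, here is a small sketch of that post-processing step: discrete units whose time span falls inside a known silence window are dropped before UWS training, and the silence boundaries are reinserted for evaluation. The interval format and the helper are our own assumptions, not the paper's code.

```python
# Hypothetical helper: units are (start, end, label) tuples, times in seconds.
def remove_silence_units(units, silences):
    def in_silence(start, end):
        return any(s <= start and end <= e for s, e in silences)
    return [(s, e, lab) for (s, e, lab) in units if not in_silence(s, e)]

units = [(0.0, 0.1, "u7"), (0.1, 0.4, "u3"), (0.4, 0.6, "u3"), (0.6, 0.8, "u12")]
silences = [(0.35, 0.65)]  # e.g. from gold annotation or a silence detector such as Praat
print(remove_silence_units(units, silences))
# [(0.0, 0.1, 'u7'), (0.1, 0.4, 'u3'), (0.6, 0.8, 'u12')]
```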
Moreover, a bene\ufb01t of these models is the potentially exploitable bilingual alignment discovered during training. Boito et al. (2019) used these alignments for \ufb01ltering the generated vocabulary, increasing type retrieval. 6." + }, + { + "url": "http://arxiv.org/abs/2003.13325v1", + "title": "Investigating Language Impact in Bilingual Approaches for Computational Language Documentation", + "abstract": "For endangered languages, data collection campaigns have to accommodate the\nchallenge that many of them are from oral tradition, and producing\ntranscriptions is costly. Therefore, it is fundamental to translate them into a\nwidely spoken language to ensure interpretability of the recordings. In this\npaper we investigate how the choice of translation language affects the\nposterior documentation work and potential automatic approaches which will work\non top of the produced bilingual corpus. For answering this question, we use\nthe MaSS multilingual speech corpus (Boito et al., 2020) for creating 56\nbilingual pairs that we apply to the task of low-resource unsupervised word\nsegmentation and alignment. Our results highlight that the choice of language\nfor translation influences the word segmentation performance, and that\ndifferent lexicons are learned by using different aligned translations. Lastly,\nthis paper proposes a hybrid approach for bilingual word segmentation,\ncombining boundary clues extracted from a non-parametric Bayesian model\n(Goldwater et al., 2009a) with the attentional word segmentation neural model\nfrom Godard et al. (2018). Our results suggest that incorporating these clues\ninto the neural models' input representation increases their translation and\nalignment quality, specially for challenging language pairs.", + "authors": "Marcely Zanon Boito, Aline Villavicencio, Laurent Besacier", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Computational Language Documentation (CLD) is an emerging research \ufb01eld whose focus lies on helping to automate the manual steps performed by linguists during language documentation. The need for this support is ever more crucial given predictions that more than 50% of all currently spoken languages will vanish before 2100 (Austin and Sallabank, 2011). For these very low-resource scenarios, transcription is very time-consuming: one minute of audio is estimated to take one hour and a half on average of a linguist\u2019s work (Austin and Sallabank, 2013). This transcription bottleneck problem (Brinckmann, 2009), combined with a lack of human resources and time for documenting all these endangered languages, can be attenuated by translating into a widely spoken language to ensure subsequent interpretability of the collected recordings. Such parallel corpora have been recently created by aligning the collected audio with translations in a well-resourced language (Adda et al., 2016; Godard et al., 2017; Boito et al., 2018), and some linguists even suggested that more than one translation should be collected to capture deeper layers of meaning (Evans and Sasse, 2004). However, in this documentation scenario, the impact of the language chosen for translation rests understudied, and it is unclear if similarities among languages have a signi\ufb01cant impact in the automatic bilingual methods used for information extraction (these include word segmentation, word alignment, and translation). 
Recent work in CLD includes the use of aligned translation for improving transcription quality (Anastasopoulos and Chiang, 2018), and for obtaining bilingual-rooted word segmentation (Duong et al., 2016; Boito et al., 2017). There are pipelines for obtaining manual (Foley et al., 2018) and automatic (Michaud et al., 2018) transcriptions, and for aligning transcription and audio (Strunk et al., 2014). Other examples are methods for low-resource segmentation (Lignos and Yang, 2010; Goldwater et al., 2009b), and for lexical unit discovery without textual resources (Bartels et al., 2016). Moreover, direct speech-to-speech (Tjandra et al., 2019) and speech-to-text (Besacier et al., 2006; B\u00b4 erard et al., 2016) architectures could be an option for the lack of transcription, but there is no investigation yet about how exploitable these architectures can be in low-resource settings. Finally, previous work also showed that Neural Machine Translation models at the textual level are able to provide exploitable soft-alignments between sentences by using only 5,130 training examples (Boito et al., 2019). In this work, we investigate the existence of language impact in bilingual approaches for CLD, tackling word segmentation,1 one of the \ufb01rst tasks performed by linguists after data collection. More precisely, the task consists in detecting word boundaries in an unsegmented phoneme sequence in the language to document, supported by the translation available at the sentence-level. The phonemes in the language to document can be manually obtained, or produced automatically as in Godard et al. (2018). For our experiments, we use the eight languages from the multilingual speech-to-speech MaSS dataset (Boito et al., 2020): Basque (EU), English (EN), Finnish (FI), French (FR), Hungarian (HU), Romanian (RO), Russian (RU) and Spanish (ES). We create 56 bilingual models, seven per language, simulating the documentation of each language supported by different sentence-level aligned translations. This setup allows us to investigate how having the same content, but translated in different languages, affects bilingual word segmentation. We highlight that in 1Here, word is de\ufb01ned as a sequence of phones that build a minimal unit of meaning. arXiv:2003.13325v1 [cs.CL] 30 Mar 2020 \fthis work we use a dataset of well-resourced languages due to the lack of multilingual resources in documentation languages that could be used to investigate this hypothesis. Thus, for keeping results coherent and generalizable for CLD, we down-sample our corpus, running our experiments using only 5k aligned sentences as a way to simulate a low-resource setting. We train bilingual models based on the segmentation and alignment method from Godard et al. (2018), investigating the language-related impact in the quality of segmentation, translation and alignment. Our results con\ufb01rm that the language chosen for translation has a signi\ufb01cant impact on word segmentation performance, what aligns with Haspelmath (2011) who suggests that the notion of word cannot always be meaningfully de\ufb01ned cross-linguistically. We also verify that joint segmentation and alignment is not equally challenging across different languages: while we obtain good results for EN, the same method fails to segment the language-isolate EU. 
Moreover, we verify that the bilingual models trained with different aligned translations learn to focus on different structures, which suggests that having more than one translation could enrich computational approaches for language documentation. Lastly, the models' performance is improved by the introduction of a hybrid approach, which leverages the boundary clues obtained by a monolingual non-parametric Bayesian model (Goldwater et al., 2009b) into the bilingual models. This type of intermediate annotation is often produced by linguists during documentation, and its incorporation into the neural model can be seen as a form of validating word-hypotheses. This paper is organized as follows. Section 2. presents the models investigated for performing word segmentation. Section 3. presents the experimental settings, and Section 4. the results and discussion. Section 5. concludes the work.

2. Models for Word Segmentation

2.1. Monolingual Bayesian Approach
Non-parametric Bayesian models (Goldwater, 2007; Johnson and Goldwater, 2009) are statistical approaches that can be used for word segmentation and morphological analysis, and are known to be very robust in low-resource settings (Godard et al., 2016; Goldwater et al., 2009a). In these monolingual models, words are generated by a uni- or bigram model over a non-finite inventory, through the use of a Dirichlet process. Although providing reliable segmentation in low-resource settings, these monolingual models are incapable of automatically producing alignments with a foreign language, and therefore the discovered pseudo-word segments can be seen as "meaningless". Godard et al. (2018) also showed that dpseg (Goldwater et al., 2006; Goldwater et al., 2009a; available at http://homepages.inf.ed.ac.uk/sgwater/resources.html) behaves poorly on pseudo-phone units discovered from speech, which limits its application. Here, we investigate its use as an intermediate monolingual-rooted segmentation system, whose discovered boundaries are used as clues by bilingual models.

2.2. Bilingual Attention-based Approach
We reproduce the approach from Godard et al. (2018), who train Neural Machine Translation (NMT) models between language pairs, using as source language the translation (word-level) and as target the language to document (unsegmented phoneme sequence). Due to the attention mechanism present in these networks (Bahdanau et al., 2014), it is possible, after training, to retrieve soft-alignment probability matrices between source and target sentences. The soft-alignment probability matrix for a given sentence pair is a collection of context vectors. Formally, a context vector for a decoder step t is computed using the set of source annotations H and the last state of the decoder network (s_{t-1}, the translation context). The attention is the result of the weighted sum of the source annotations H (with H = h_1, ..., h_A) and their \alpha probabilities (Eq. 1). These are obtained through a feed-forward network align, jointly trained, and followed by a softmax operation (Eq. 2).

c_t = \mathrm{Att}(H, s_{t-1}) = \sum_{i=1}^{A} \alpha_i^t h_i    (1)

\alpha_i^t = \mathrm{softmax}(\mathrm{align}(h_i, s_{t-1}))    (2)

The authors show that these soft-alignment probability matrices can be used to produce segmentation over phoneme (or grapheme) sequences. This is done by segmenting neighboring phonemes whose probability distribution (over the words in the aligned source translation) peaks at different words.
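To make this segmentation procedure concrete, here is a minimal sketch (assuming NumPy) of how a soft-alignment probability matrix can be turned into word boundaries on the phoneme side: neighboring phonemes are grouped together as long as their attention distribution peaks at the same source word. The toy matrix below is purely illustrative.

```python
import numpy as np

def segment_from_attention(phonemes, attention):
    """Group neighboring target phonemes whose attention distribution
    peaks at the same source word.

    phonemes  -- list of target symbols, length T
    attention -- array of shape (T, S): soft-alignment probabilities of each
                 target phoneme (row) over the S source words (columns)
    Returns the list of discovered word-like units.
    """
    peaks = attention.argmax(axis=1)        # most-attended source word per phoneme
    units, current = [], [phonemes[0]]
    for t in range(1, len(phonemes)):
        if peaks[t] == peaks[t - 1]:        # same source word: keep clustering
            current.append(phonemes[t])
        else:                               # peak moved to another word: close the unit
            units.append("".join(current))
            current = [phonemes[t]]
    units.append("".join(current))
    return units

# toy example: 6 phonemes aligned to 2 source words
phonemes = ["a", "i", "n", "t", "r", "a"]
attention = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.3, 0.7],
                      [0.1, 0.9],
                      [0.2, 0.8],
                      [0.4, 0.6]])
print(segment_from_attention(phonemes, attention))   # ['a', 'intra']
```

In the experiments reported below, the soft-alignment matrices of two training runs are averaged before this step; the same routine applies unchanged to the averaged matrix.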
The result is a pair of phoneme sequences and translation words, as illustrated on the bottom half of Figure 1. In this work we refer to this type of model simply as neural model. 2.3. Bilingual Hybrid Approach The monolingual approach (\u00a72.1.) has the disadvantage of not producing bilingual alignment, but it segments better than the bilingual approach (\u00a72.2.) when the phonemic input is used (Godard et al., 2018). In this work we investigate a simple way of combining both approaches by creating a hybrid model which takes advantage of the Bayesian method\u2019s ability to correctly segment from small data while jointly producing translation alignments. We augment the original unsegmented phoneme sequence with the dpseg output boundaries. In this augmented input representation, illustrated in Figure 1, a boundary is denoted by a special token which separates the words identi\ufb01ed by dpseg. We call this soft-boundary insertion, since the dpseg boundaries inserted into the phoneme sequence can be ignored by the NMT model, and new boundaries can be inserted as well. For instance, in Figure 1 aintrat becomes a intrat (boundary insertion), and urat debine becomes uratdebine (soft-boundary removal). 3. Experimental Settings Multilingual Dataset: For our experiments we use the MaSS dataset (Boito et al., 2020), a fully aligned and multilingual dataset containing 8,130 sentences extracted \fFigure 1: An illustration of the hybrid pipeline for the EN>RO language pair. The Bayesian model receives the unsegmented phonemes, outputing segmentation. The discovered boundaries are then replaced by a special token, and bilingual re-segmentation and alignment are jointly performed. from the Bible. The dataset provides multilingual speech and text alignment between all the available languages: English (EN), Spanish (ES), Basque (EU), Finnish (FI), French (FR), Hungarian (HU), Romanian (RO), Russian (RU). As sentences in documentation settings tend to be short, we used RO as the pivot language for removing sentences longer (in terms of number of tokens) than 100 symbols. The resulting corpus contains 5,324 sentences, a size which is compatible with real language documentation scenarios. Table 1 presents some statistics. For the phonemic transcription of the speech (target side of the bilingual segmentation pipeline), we use the automatic phonemization from Maus forced aligner (Kisler et al., 2017), which results in an average vocabulary reduction of 835 types, the smallest being for RO (396), and the most expressive being for FR (1,708). This difference depends on the distance between phonemic and graphemic forms for each language. The phonemizations present an average number of unique phonemes of 42.5. Table 2 presents the statistic for the phonemic representation. Training and Evaluation: For monolingual segmentation, we use dpseg\u2019s unigram model with the same hyperparameters as Godard et al. (2016). The bilingual neural models were trained using a one-layer encoder (embeddings of 64), and a two-layers decoder (embeddings of 16). The remaining parameters come from Godard et al. (2018). From this work, we also reproduced the multiple runs averaging: for every language pair, we trained two networks, averaging the soft-alignment probability matrices produced. This averaging can be seen as agreement between the alignment learned with different parameters initialization. Regarding the data, 10% of the multilingual ids were randomly selected for validation, and the remaining were used for training. 
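The soft-boundary insertion used by the hybrid approach (Section 2.3.) amounts to interleaving a special token at the word boundaries proposed by dpseg before feeding the sequence to the bilingual model. Below is a minimal sketch; the boundary token string, the function name, and the representation of the dpseg output as word lengths are illustrative assumptions rather than the exact choices of the original implementation.

```python
BOUNDARY = "<b>"  # illustrative special token marking a dpseg soft boundary

def insert_soft_boundaries(phonemes, dpseg_word_lengths):
    """Interleave a boundary token between the words proposed by dpseg.

    phonemes           -- unsegmented phoneme sequence, e.g. ['a','i','n','t','r','a','t']
    dpseg_word_lengths -- dpseg output expressed as word lengths, e.g. [1, 6]
    Returns the augmented input sequence for the bilingual NMT model.
    """
    augmented, cursor = [], 0
    for length in dpseg_word_lengths:
        augmented.extend(phonemes[cursor:cursor + length])
        augmented.append(BOUNDARY)
        cursor += length
    return augmented[:-1]   # no token after the utterance-final word

# 'aintrat' segmented by dpseg as 'a intrat' (cf. the EN>RO example of Figure 1)
print(insert_soft_boundaries(list("aintrat"), [1, 6]))
# ['a', '<b>', 'i', 'n', 't', 'r', 'a', 't']
```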
We report BLEU scores (Papineni et al., 2002) over the validation set for assessing translation quality. For hybrid setups, the soft-boundary special token is removed from the output before scoring, so results are comparable. Finally, for the reported word discovery results, the totality of the corpus is considered for evaluation.

Table 1: Statistics for the textual portion of the corpus. The last two columns give the average of the named metrics.

     #Types  #Tokens  Token Length  Tokens/Sentence
EN    5,232   90,716      3.98          17.04
ES    8,766   85,724      4.37          16.10
EU   11,048   67,012      5.91          12.59
FI   12,605   70,226      5.94          13.19
FR    7,226   94,527      4.12          17.75
HU   13,770   69,755      5.37          13.10
RO    7,191   88,512      4.06          16.63
RU   11,448   67,233      4.66          12.63

Table 2: Statistics for the phonemic portion of the corpus. The last two columns give the average of the named metrics.

     #Types  #Tokens  Token Length  Phonemes/Sentence
EN    4,730   90,657      3.86          56.18
ES    7,980   85,724      4.30          68.52
EU    9,880   67,012      6.94          71.13
FI   12,088   70,226      5.97          72.37
FR    5,518   93,038      3.21          52.86
HU   12,993   69,755      5.86          65.52
RO    6,795   84,613      4.50          68.04
RU   10,624   67,176      6.19          59.26

4. Bilingual Experiments
Word segmentation boundary F-scores are presented in Table 3. For the bilingual methods, Table 4 presents the averaged BLEU scores. We observe that, similar to the trend observed in Table 3, hybrid models are on average superior in terms of BLEU scores (we find an average BLEU score difference between the best hybrid and neural setups of 1.50 points after removing the outlier RO; for this particular case, hybrid setups have inferior translation performance, with an average BLEU reduction of 11.44). Moreover, we observe that segmentation and translation scores are strongly correlated for six of the eight languages, with an average ρ-value of 0.76 (significant at p < 0.01). The exceptions were EU (0.46) and RO (-0.06). While for EU we believe the general lack of performance of the systems could explain the results, the profile of the RO hybrid setups was surprising. It highlights that the relationship between BLEU score and segmentation performance is not always clearly observed. In summary, we find that the addition of soft-boundaries increases word segmentation results, but its impact on translation performance needs further investigation.

Table 3: Word Segmentation Boundary F-score results for neural (top), hybrid (middle) and dpseg (bottom). The columns represent the target of the segmentation, while the rows represent the translation language used. For bilingual models, darker squares represent higher scores. Better visualized in color.

Looking at the segmentation results, we verify that, given the same amount of data and supervision, the segmentation performance for different target languages varies: EN seems to be the easiest to segment (neural: 69.1, hybrid: 73.3), while EU is the most challenging to segment with the neural approach (neural: 38.4, hybrid: 47.3). The following subsections will explore the relationship between segmentation, alignment performance and linguistic properties.
4.1.
Source Language Impact Bilingual Baseline Comparison: The results con\ufb01rm that there is an impact related to using different source languages for generating the segmentations, and we identify interesting language pairs emerging as the most ef\ufb01cient, such as FI>HU (Uralic Family), FR>RO and FR>ES (Romance family).4 In order to consolidate these results, we investigate if the language ranking obtained (in terms of best translation languages for segmenting a target language) is due to a similar pro\ufb01le of the source and target languages in terms of word length and tokens per sentence. 4We denote L1>L2 as using L1 for segmenting L2. L1<>L2 means L1>L2 and L2>L1. Table 4: BLEU 4 average results for neural (top) and hybrid (bottom) bilingual models. The columns represent the target of the segmentation. Darker squares represent higher scores. Better visualized in color. Table 5: Proportional segmentation results. The columns represent the target of the segmentation. Darker squares represent higher word boundary F-scores. Better visualized in color. Since translation words are used to cluster the phoneme sequences into words (bilingual-rooted word segmentation), having more or less translation words could be a determining aspect in the bilingual segmentation performed (more details about this in Section 4.3.). For this investigation, we use a naive bilingual baseline called proportional (Godard et al., 2018). It performs segmentation by distributing phonemes equally between the words of the aligned translation, insuring that words that have more letters, receive more phonemes (hence proportional). The average difference between the best hybrid (Table 3) and proportional (Table 5) results is of 25.92 points. This highlights not only the challenge of the task, but that the alignments learned by the bilingual models are not trivial. We compute Pearson\u2019s correlation between bilingual hybrid and proportional segmentation scores, observing that no language presents a signi\ufb01cant correlation for p < 0.01. However, when all languages pairs are considered together (N = 56), a signi\ufb01cant positive correlation (0.74) is ob\fserved. Our interpretation is that the token ratio between the number of tokens in source and the number of tokens in target sentences have a signi\ufb01cant impact on bilingual segmentation dif\ufb01culty. However, it does not, by itself, dictates the best choice of translation language for a documentation scenario. For instance, the proportional baseline results indicate that EU is the best choice for segmenting RU. This choice is not only linguistically incoherent, but bilingual models reached their worst segmentation and translation results by using this language. This highlights that while statistical features might impact greatly low-resource alignment and should be taken into account, relying only on them might result in sub-optimal models. Language Ranking: Looking into the quality of the segmentation results and their relationship with the language ranking, our intuition was that languages from the same family would perform the best. For instance, we expected ES<>FR, ES<>RO, FR<>RO (Romance family) and FI<>HU (Uralic family) to be strong language pairs. While some results con\ufb01rm this hypothesis (FR>ES, FI>HU, FR>RO), the exceptions are: EN>FR, RU<>FI and ES>EU. For EN>FR, we argue that EN was ranked high for almost all languages, which could be due to some convenient statistic features. 
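As a side note, the proportional baseline used in this comparison is simple enough to sketch directly: the phonemes of the utterance are distributed over the aligned translation words in proportion to each word's character length. The rounding scheme below is an assumption; the original implementation may differ in such details.

```python
def proportional_segmentation(phonemes, translation_words):
    """Split the phoneme sequence into one unit per translation word,
    giving longer (in characters) words proportionally more phonemes."""
    total_chars = sum(len(w) for w in translation_words)
    units, cursor = [], 0
    for i, word in enumerate(translation_words):
        if i == len(translation_words) - 1:            # last word takes the remainder
            size = len(phonemes) - cursor
        else:
            size = max(1, round(len(word) / total_chars * len(phonemes)))
        units.append("".join(phonemes[cursor:cursor + size]))
        cursor += size
    return units

# toy example: 10 phonemes distributed over a 3-word translation
print(proportional_segmentation(list("abcdefghij"), ["he", "entered", "quickly"]))
# ['a', 'bcde', 'fghij']
```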
Table 1 shows that EN presents a very reduced vocabulary in comparison to the other languages. This could result in an easier language modeling scenario, which could then re\ufb02ect in a better alignment capacity of the trained model. Moreover, for this and for RU<>FI scenarios, results seemed to reproduce the trend from the proportional baseline, in which these pairs were also found to be the best. This could be the result of a low syntactic divergence between languages of these pairs. Finally, the language isolate EU is not a good choice for segmenting any language (worst result for all languages). If we consider that this language has no relation to any other in this dataset, this result could be an indication that documentation should favor languages somehow related to the language they are trying to document. In fact, results for EU segmentation are both low (F-score and BLEU) and very close to the proportional baseline (average difference of 4.23 for neural and 13.10 for hybrid), which suggests that these models were not able to learn meaningful bilingual alignment. 4.2. Hybrid Setups Looking at the hybrid results, we verify that the these models outperform their neural counterparts. Moreover, the impact of having the soft-boundaries is larger for the languages whose bilingual segmentation seems to be more challenging, hinting that the network is learning to leverage the soft-boundaries for generating a better-quality alignment between challenging language pairs. Table 6 presents the intersection between the correct types discovered by both monolingual and hybrid models. Results show that while the monolingual baseline informs the bilingual models, it is not completely responsible for the increase in performance. This hints that giving boundary clues to the network will not simply force some pre-established segmentation, but instead it will enrich the network\u2019s internal representation. Moreover, it is interesting to observe Table 6: Intersection between the correct types discovered by both monolingual and hybrid models. We notice that the target language of the segmentation (columns) has an impact in the acceptance of soft-boundaries by the neural model. that the degree of overlap between the vocabulary generated will depend on the language target of segmentation, hinting that some languages might accept more easily the soft-boundaries proposed by the monolingual approach. Nonetheless, compared to the monolingual segmentation (Table 3), even if the hybrid approach improves over the base neural one, it deteriorates considerably the performance with respect to dpseg (average difference of 16.54 points between the best hybrid result and its equivalent monolingual segmentation). However, this deterioration is necessary in order to discover semantically meaningful structures (joint bilingual segmentation and alignment), which is a harder task than monolingual segmentation. In this scenario, the monolingual results should be interpreted as an intermediate, good quality, segmentation/word-hypotheses created by linguists, which might be validated or not in light of the system\u2019s bilingual output. 4.3. Analysis of the Discovered Vocabulary Next we study the characteristics of the vocabulary outputed by the bilingual models focusing on the impact caused by the aligned translation. For this investigation, we report results for hybrid models only, since their neural equivalents present the same trend. We refer as token the collection of phonemes segmented into word-like units. 
Types are de\ufb01ned as the set of distinct tokens. Table 7 brings the hybrid model\u2019s total number of types. Looking at the rows, we see that EN, ES, FR, RO, which are all fusional languages, generated in average the smallest vocabularies. We also notice that HU and FI are the languages that tend to create the largest vocabularies when used as translation language. This could be due to both languages accepting a \ufb02exible word order, thus creating a dif\ufb01cult alignment scenario for low-resource settings. Moreover, these languages, together with EU, are agglutinative languages. This might be an explanation for the lack of performance in general for setups using these languages as target. In these conditions, the network must learn to align many translation words to the same structure in order to achieve the expected segmentation. However, sometimes over-segmentation might be the result of the network favoring alignment content instead of phoneme clustering. Notwithstanding, the models for agglutinative languages \fTable 7: Number of types produced by the hybrid models. Figure 2: Average token length of the reference, monolingual dpseg, and best neural and hybrid setups from Table 3. are not the only ones over-segmenting. Looking at the average token length of the segmentations produced in Figure 2, and supported by the size of the vocabularies, we verify that bilingual approaches tend to over-segment the output independent of the language targeted. This over-segmentation tends to be more accentuated in hybrid setups, with the exception of EN, FR and RO. This is probably due to the challenge of clustering the very long sequence of phonemes into the many available source words (see statistics for words and phonemes per sentence in Tables 1 and 2). Furthermore, the very de\ufb01nition of a word might be dif\ufb01cult to de\ufb01ne cross-linguistically, as discussed by Haspelmath (2011), and different languages might encourage a more \ufb01ne-grained segmentation. For instance, in Figure 3 we see the EN alignment generated by the FR and ES neural models for the same sentence. Focusing at the do not (du:nQt) at the end of the sentence, we see that the ES model does not segment it, aligning everything to the ES translation no. Meanwhile the FR model segments the structure in order to align it to the translation ne pas. In both cases the discovered alignments are correct however, the ES segmentation is considered wrong. This highlights that the use of a segmentation task for evaluating the learned alignment might be sub-optimal, and that a more in-depth evaluation of source-to-target correspondences should be considered. In Section 4.4. we showcase a method for \ufb01ltering the alignments generated by the bilingual models. Concluding, in this work we study the alignment implicitly optimized by a neural model. An interesting direction would be the investigation of explicit alignment optimization for translation models, such as performed in Godard et al. (2019), where the authors consider the segmentation length generated by the bilingual alignments as part of their loss during training. Figure 3: EN attention matrices generated by neural FR (left) and ES (right) bilingual models. The squares represent alignment probabilities (the darker the square, the higher the probability). The EN phonemization (rows) correspond to the following sentence: \u201cBut because I tell the truth, you do not believe me\u201d. 4.4. 
Alignment Con\ufb01dence The neural approach used here for bilingual-rooted word segmentation produces alignments between source and target languages. In this section we investigate how these alignments vary in models trained using different translation (source) languages. This extends the results from the previous section, that showed that models trained on different languages will present different lexicon sizes. We aim to show that this difference in segmentation behavior comes from the different alignments that are discovered by the models with access to different languages. We use the approach from Boito et al. (2019) for extracting the alignments the bilingual models are more con\ufb01dent about. For performing such a task, Average Normalized Entropy, as de\ufb01ned in Boito et al. (2019), is computed for every (segmentation, aligned translation) pair. The scores are used for ranking the alignments in terms of con\ufb01dence, with low-entropy scores representing the high-con\ufb01dence automatically generated alignments. In previous work, we showed that this approach allow us to increase type retrieval scores by \ufb01ltering the good from the bad quality alignments discovered. For this investigation, we chose to present results applied to the target language FR. Table 8 presents the top 10 low-entropy (high-con\ufb01dence) pairs from 3 different translation languages (from Table 3, FR column). The phoneme sequences are accompanied by their grapheme equivalents to increase readability, but all presented results were computed over phoneme sequences. The other translation languages were also omitted for readability purpose. We observe a different set of discovered types depending on the language used, but all languages learn a fair amount \fTable 8: Top low-entropy/high-con\ufb01dence (graphemization, phonemic segmentation, aligned translation) results for EN, ES and RU models for segmenting FR. The output of the system is the phonemic segmentation, and graphemization is provided only for readability purpose. N-A-W identify unknown/incorrect generated types. of biblical names and numbers, very frequent due to the nature of the dataset.5 This highlights that very frequent types might be captured independently of the language used, but other structures might be more dependent on the chosen language. We also notice the presence of incorrect alignments (the word car (because) aligned to the word main), concatenations (the words les huissiers (the ushers) became a single word) and incorrect types (N-A-W in the table). This is to be expected, as these are automatic alignments. Con\ufb01rming the intuition that the models are focused on different information depending on the language they are trained on, we studied the vocabulary intersection of the FR bilingual models for the top 200 correct discovered types ranked by alignment con\ufb01dence. We observed that the amount of shared lexicon for the sets is fairly small: the smallest intersection being of 20% (between EU and RO) and the largest one of 35.5% (between RU and FI). In other words, this means that the high-con\ufb01dence alignments learned by distinct bilingual models differ considerably. Even for models that shared most structures, such as FI and RU (35.5%), and HU and RU (34%), this intersection is still limited. This shows that the bilingual models will discover different structures, depending on the supervision available. 
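A minimal sketch (assuming NumPy) of this entropy-based ranking is given below: for each sentence we average the normalized entropy of the attention distributions of its target phonemes, and then sort sentences from low to high entropy. This only illustrates the idea; the exact normalization should be checked against the definition in Boito et al. (2019), and the data structures here are illustrative.

```python
import numpy as np

def average_normalized_entropy(attention):
    """Average, over target positions, of the entropy of each attention row,
    normalized by the maximum possible entropy log(S). Low values mean the
    model commits to few source words, i.e. a more confident alignment.
    (The normalization choice is an assumption made for this sketch.)"""
    eps = 1e-12
    _, S = attention.shape
    row_entropy = -(attention * np.log(attention + eps)).sum(axis=1)
    return float((row_entropy / np.log(S)).mean())

def rank_by_confidence(pairs):
    """pairs: list of (segmentation, aligned translation, attention matrix).
    Returns the pairs sorted from most to least confident (lowest ANE first)."""
    return sorted(pairs, key=lambda p: average_normalized_entropy(p[2]))

# toy example: a peaky (confident) vs. a flat (unconfident) alignment
confident = np.array([[0.97, 0.02, 0.01], [0.01, 0.98, 0.01]])
flat = np.array([[0.34, 0.33, 0.33], [0.33, 0.34, 0.33]])
ranked = rank_by_confidence([("seg-b", "trad-b", flat), ("seg-a", "trad-a", confident)])
print([seg for seg, _, _ in ranked])   # ['seg-a', 'seg-b']
```

Ranked this way, the top-ranked (low-entropy) alignments produced with different translation languages overlap only partially, as quantified above.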
This is particularly interesting considering that the content of the aligned information remains the same, and the only difference between the bilingual models is the language in which the information is expressed. It highlights how collecting data in multilingual settings (that is, in more than one translation language) could enrich approaches for CLD. Lastly, we leave as future work a more generalizable study of the distinctions in the bilingual alignments, including the evaluation of the word-level alignments discovered by the models. 5." + }, + { + "url": "http://arxiv.org/abs/1910.05154v1", + "title": "How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages", + "abstract": "For language documentation initiatives, transcription is an expensive\nresource: one minute of audio is estimated to take one hour and a half on\naverage of a linguist's work (Austin and Sallabank, 2013). Recently, collecting\naligned translations in well-resourced languages became a popular solution for\nensuring posterior interpretability of the recordings (Adda et al. 2016). In\nthis paper we investigate language-related impact in automatic approaches for\ncomputational language documentation. We translate the bilingual Mboshi-French\nparallel corpus (Godard et al. 2017) into four other languages, and we perform\nbilingual-rooted unsupervised word discovery. Our results hint towards an\nimpact of the well-resourced language in the quality of the output. However, by\ncombining the information learned by different bilingual models, we are only\nable to marginally increase the quality of the segmentation.", + "authors": "Marcely Zanon Boito, Aline Villavicencio, Laurent Besacier", + "published": "2019-10-11", + "updated": "2019-10-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction The Cambridge Handbook of Endangered Languages (Austin and Sallabank, 2011) estimates that at least half of the 7,000 languages currently spoken worldwide will no longer exist by the end of this century. For these endangered languages, data collection campaigns have to accommodate the challenge that many of them are from oral tradition, and producing transcriptions is costly. This transcription bottleneck problem can be handled by translating into a widely spoken language to ensure subsequent interpretability of the collected recordings, and such parallel corpora have been recently created by aligning the collected audio with translations in a well-resourced language (Adda et al., 2016; Godard et al., 2017; Boito et al., 2018). Moreover, some linguists suggested that more than one translation should be collected to capture deeper layers of meaning (Evans and Sasse, 2004). This work is a contribution to the Computational Language Documentation (CLD) research \ufb01eld, that aims to replace part of the manual steps performed by linguists during language documentation initiatives by automatic approaches. Here we investigate the unsupervised word discovery and segmentation task, using the bilingual-rooted approach from Godard et al. (2018). There, words in the well-resourced language are aligned to unsegmented phonemes in the endangered language in order to identify group of phonemes, and to cluster them into word-like units. We experiment with the Mboshi-French parallel corpus, translating the French text into four other well-resourced languages in order to investigate language impact in this CLD approach. 
Our results hint that this language impact exists, and that models based on different languages will output different word-like units. 2 Methodology The Multilingual Mboshi Parallel Corpus: In this work we extend the bilingual Mboshi-French parallel corpus (Godard et al., 2017), fruit of the documentation process of Mboshi (Bantu C25), an endangered language spoken in Congo-Brazzaville. The corpus contains 5,130 utterances, for which it provides audio, transcriptions and translations in French. We translate the French into four other well-resourced languages through the use of the DeepL translator.1 The languages added to the dataset are: English, German, Portuguese and Spanish. Table 1 shows some statistics for the produced Multilingual Mboshi parallel corpus.2 Bilingual Unsupervised Word Segmentation/Discovery Approach: We use the bilingual neuralbased Unsupervised Word Segmentation (UWS) approach from Godard et al. (2018) to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks (Bahdanau et al., 2014), posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs. 1Available at https://www.deepl.com/translator 2Available at https://github.com/mzboito/mmboshi \fTable 1: Statistics for the Multilingual Mboshi parallel corpus. The French text is used for generating translation in the four other languages present in the right side of the table. Table 2: From left to right, results for: bilingual UWS, multilingual leveraging by voting, ANE selection. Multilingual Leveraging: In this work we apply two simple methods for including multilingual information into the bilingual models from Godard et al. (2018). The \ufb01rst one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the \ufb01nal discovered boundaries. The voting is performed by applying an agreement threshold T over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) (Boito et al., 2019a) computed over these matrices for selecting the most con\ufb01dent one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence. 3 Experiments The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from Boito et al. (2019a). Table 2 presents the results for bilingual UWS and multilingual leveraging. 
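Both leveraging strategies described above are straightforward to sketch. In the code below, the voting function keeps a boundary position if the fraction of bilingual models proposing it reaches the agreement threshold T, and the selection function picks, for each sentence, the segmentation of the model whose soft-alignment matrix has the lowest Average Normalized Entropy (reusing a scoring function such as the one sketched earlier). Function names and the boundary-set representation are illustrative assumptions.

```python
def vote_boundaries(boundary_sets, threshold):
    """boundary_sets -- one set of boundary positions per bilingual model
    threshold       -- agreement T in [0, 1]; 0 keeps the union, 1 the intersection
    Returns the boundary positions kept after voting."""
    n_models = len(boundary_sets)
    counts = {}
    for boundaries in boundary_sets:
        for position in boundaries:
            counts[position] = counts.get(position, 0) + 1
    return {pos for pos, c in counts.items() if c / n_models >= max(threshold, 1e-9)}

def ane_selection(candidate_segmentations, ane_scores):
    """candidate_segmentations -- segmentations of the same sentence, one per language pair
    ane_scores               -- ANE of the soft-alignment matrix behind each candidate
    Returns the segmentation of the most confident (lowest-ANE) model."""
    best = min(range(len(candidate_segmentations)), key=lambda i: ane_scores[i])
    return candidate_segmentations[best]

# toy example: three models voting with 50% agreement, then ANE selection
models = [{2, 5, 9}, {2, 9}, {2, 4, 9}]
print(sorted(vote_boundaries(models, threshold=0.5)))          # [2, 9]
print(ane_selection(["a intrat", "ain trat"], [0.21, 0.47]))   # 'a intrat'
```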
For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table 1 that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabularyrelated features can impact greatly the system\u2019s capacity to language-model, and consequently the \ufb01nal quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more dif\ufb01cult to model than others (Cotterell et al., 2018). For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table 2 (ranked by performance; e.g. 1-3 means the combination of FR(1), \fTable 3: Top 10 con\ufb01dent (discovered type, translation) pairs for the \ufb01ve bilingual models. The \u201c+\u201d mark means the discovered type is a concatenation of two existing true types. EN(2) and PT(3)). We observe that the performance improvement is smaller than the one observed in previous work (Boito et al., 2019b), which we attribute to the fact that our dataset was arti\ufb01cially augmented. This could result in the available multilingual form of supervision not being as rich as in a manually generated dataset. Finally, the best boundary segmentation result is obtained by performing multilingual voting with all the languages and an agreement of 50%, which indicates that the information learned by different languages will provide additional complementary evidence. Lastly, following the methodology from Boito et al. (2019a), we extract the most con\ufb01dent alignments (in terms of ANE) discovered by the bilingual models. Table 3 presents the top 10 most con\ufb01dent (discovered type, translation) pairs.3 Looking at the pairs the bilingual models are most con\ufb01dent about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation obo\u00e1+ng\u00e1). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, Haspelmath (2011) suggests the notion of word cannot always be meaningfully de\ufb01ned cross-linguistically. 4" + }, + { + "url": "http://arxiv.org/abs/1907.12895v3", + "title": "MaSS: A Large and Clean Multilingual Corpus of Sentence-aligned Spoken Utterances Extracted from the Bible", + "abstract": "The CMU Wilderness Multilingual Speech Dataset (Black, 2019) is a newly\npublished multilingual speech dataset based on recorded readings of the New\nTestament. It provides data to build Automatic Speech Recognition (ASR) and\nText-to-Speech (TTS) models for potentially 700 languages. However, the fact\nthat the source content (the Bible) is the same for all the languages is not\nexploited to date.Therefore, this article proposes to add multilingual links\nbetween speech segments in different languages, and shares a large and clean\ndataset of 8,130 parallel spoken utterances across 8 languages (56 language\npairs). 
We name this corpus MaSS (Multilingual corpus of Sentence-aligned\nSpoken utterances). The covered languages (Basque, English, Finnish, French,\nHungarian, Romanian, Russian and Spanish) allow researches on speech-to-speech\nalignment as well as on translation for typologically different language pairs.\nThe quality of the final corpus is attested by human evaluation performed on a\ncorpus subset (100 utterances, 8 language pairs). Lastly, we showcase the\nusefulness of the final product on a bilingual speech retrieval task.", + "authors": "Marcely Zanon Boito, William N. Havard, Mahault Garnerin, \u00c9ric Le Ferrand, Laurent Besacier", + "published": "2019-07-30", + "updated": "2020-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Recently, a remarkable work introduced the CMU Wilderness Multilingual Speech Dataset (Black, 2019).1 Based on readings of the New Testament from The Faith Comes By Hearing website, it provides data to build AutomaticSpeech-Recognition (ASR) and Text-to-Speech (TTS) models for potentially 700 languages. Such a resource allows the community to experiment and to develop speech technologies on an unprecedented number of languages. However, the fact that the initial language material from these monolingual corpora (the Bible) is the same for all languages, thus constituting a multilingual and comparable2 spoken corpus, is not exploited to date. Therefore, this article proposes an automatic pipeline for adding multilingual links between small speech segments in different languages. We apply our method to 8 languages (Basque, English, Finnish, French, Hungarian, Romanian, Russian and Spanish), resulting in 56 language pairs for which we obtain speech-to-speech, speech-to-text and textto-text alignments. In order to ensure the quality of the pipeline, a human evaluation was performed on a corpus subset (8 language pairs, 100 sentences) by bilingual native speakers. The current version of our dataset (named MaSS for Multilingual corpus of Sentence-aligned Spoken utterances) is freely available to the community, together with instructions and scripts allowing the pipeline extension to new languages.3 1Available at http://www.festvox.org/cmu_ wilderness/index.html 2Our de\ufb01nition of a comparable corpus in this work is the following: a non-sentence-aligned corpus, parallel at a broader granularity (e.g. chapter, document). 3Available at https://github.com/getalp/ mass-dataset We believe the obtained corpus can be useful in several applications, such as speech-to-speech retrieval (Lee et al., 2015), multilingual speech representation learning (Harwath et al., 2018a) and direct speech-to-speech translation (so far, mostly direct speech-to-text translation has been investigated (B\u00b4 erard et al., 2016; Weiss et al., 2017; Bansal et al., 2017; B\u00b4 erard et al., 2018)). Moreover, typological and dialectal \ufb01elds could use such a corpus to solve some of the following novel tasks using parallel speech: word alignment, bilingual lexicon extraction, and semantic retrieval. This paper is organized as follows: after brie\ufb02y presenting related works in Section 2, we review the dataset source and extraction pipeline in Section 3. Section 4 describes the human veri\ufb01cation performed and comments on some of the linguistic features present in the covered languages. Section 5 presents a possible application of the dataset: speechto-speech retrieval. Section 6 concludes this work. 2. Related Work 2.1. 
End-to-end Speech Translation Previous Automatic Speech-to-Text Translation (AST) systems operate in two steps: source language Automatic Speech Recognition (ASR) and source-to-target text Machine Translation (MT). However, recent works have attempted to build end-to-end AST without using source language transcription during learning or decoding (B\u00b4 erard et al., 2016; Weiss et al., 2017), or by using it at training time only (B\u00b4 erard et al., 2018). Very recently several extensions of these pioneering works were introduced: low-resource AST (Bansal et al., 2019), unsupervised AST (Chung et al., 2018), end-to-end speech-to-speech translation (Translatotron) (Jia et al., 2019b). Improvements for end-toend AST were also proposed by using weakly supervised data (Jia et al., 2019a), or by adding a second attention mechanism (Sperber et al., 2019). arXiv:1907.12895v3 [cs.CL] 26 Feb 2020 \f2.2. Multilingual Approaches Multilingual approaches for speech and language processing are growing ever more popular. They are made possible by the availability of massively parallel language resources covering an increasing number of languages of the world. These resources feed truly multilingual approaches, such as machine translation (Aharoni et al., 2019), syntax parsing (Nivre et al., 2016), automatic speech recognition (Schultz and Schlippe, 2014; Adams et al., 2019), lexical disambiguation (Navigli and Ponzetto, 2010; S\u00b4 erasset, 2015), and computational dialectology (Christodoulopoulos and Steedman, 2015). 2.3. Corpora for End-to-end Speech Translation To date, few datasets are available for multilingual automatic speech translation (only a few parallel corpora publicly available4). For instance, Fisher and Callhome Spanish-English corpora (Post et al., 2013) provide 38 hours of speech transcriptions of telephonic conversations aligned with their translations. However, these corpora are only medium size and contain low-bandwidth recordings. Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) also provides speech aligned to translated text, but this corpus is rather small (less than 8 hours per language). A 236 hours extension of Librispeech with French translations was proposed by Kocabiyiko\u02d8 glu et al. (2018). They exploited automatic alignment procedures, \ufb01rst at the text level (between transcriptions and translations), and then between the text and the corresponding audio segments. Inspired by this work, Di Gangi et al. (2019) created MuST-C, a multilingual speech translation corpus for training end-to-end AST systems from English into 8 languages.5 Similar in size, the English-Portuguese dataset How2 (Sanabria et al., 2018) was created by translating English short tutorials into Portuguese using a crowd-sourcing platform. More recently, Iranzo-S\u00b4 anchez et al. (2020) introduced a multilingual speech corpus including several source languages. The remark that can be made on all these corpora is that they are limited to Indo-European languages and thus typologically similar. 3. A Large and Clean Subset of Sentence Aligned Spoken Utterances (MaSS) In this section we present the source material for our multilingual corpus (Section 3.1.), we brie\ufb02y explain the CMU speech-to-text pipeline (Section 3.2.), and we detail our speech-to-speech pipeline (Section 3.3.). 3.1. 
The Source Material: Bible.is The Faith Comes By Hearing website6 (or simply bible.is) is an online platform that provides audio-books of the Bible with transcriptions in 1,294 languages. These recordings are a collection of \ufb01eld, virtual and partner recordings. In all cases, only native speakers participate in the recordings, and the number of different voices can go from one up to 4Table 1 in (Di Gangi et al., 2019) provides a good survey. 5Available at https://ict.fbk.eu/must-c 6Available at https://www.bible.is twenty \ufb01ve. Moreover, the recordings can be performed in drama and non-drama fashion, the former being an acted version of the text, corresponding to less tailored realizations. Finally, based on exchanges with the target users (the native community), background music can be added to the recordings.7 In summary, while the written content is always the same across different languages, the corresponding speech can be quite different in terms of realization (drama and non-drama), number of speakers, acoustic quality (\ufb01eld, virtual or partner recordings), and can sometimes contain background noise (music). 3.2. The CMU Wilderness Multilingual Speech Corpus The CMU Wilderness corpus (Black, 2019) is a speech dataset containing over 700 different languages for which it provides audio excerpts aligned with their transcription. Each language accounts for around 20 hours of data extracted from readings of the New Testament, and available at the bible.is website. Segmentation was made at the sentence level, and alignment between speech and corresponding text can be obtained with the pipeline provided along with the dataset. This pipeline, notably, can process a large amount of languages without using any extra resources such as acoustic models or pronunciation dictionaries. However, for most of the languages on the website, several recording versions are available, each of them having signi\ufb01cant differences in speech content, as explained in Section 3.1. As this pipeline extracted the soundtracks from the defaults links, audio excerpts often contain music, and it is unknown if drama or non-drama versions were selected. Thus, although the quality of the alignment is good for many languages, it could be inaccurate (or noisy) for an unknown subset. Lastly, the \ufb01nal segmentation from chapters was obtained through the use of punctuation marks. While ef\ufb01cient for a speech-to-text monolingual scenario, this strategy does not allow accurate multilingual alignment, since different languages and translations may result in different sentence segmentation and ordering. 3.3. Our Pipeline: from Speech-to-text to Speech-to-speech Alignment As far as multilingual alignment is concerned, Bible chapters are inherently aligned at the chapter level. But Bible chapters are very long excerpts, with an average duration of 5 minutes. Alignments at this broad granularity are not relevant for research in speech-to-speech translation or speechto-speech alignment. Thus, we propose a new extraction methodology that allows us to obtain fully aligned speech segments at a much smaller granularity (segments between 8 to 10 seconds). Our pipeline is summarized in Figure 1 and described below. 3.3.1. Alignment pipeline 1. Extracting clean spoken chapters. Starting from the pipeline described in the last section, which provides scripts 7More information available at https://www. faithcomesbyhearing.com/mission/recordings \fFigure 1: The pipeline for a given language in the bible.is website. 
for downloading audio data and transcriptions from the bible.is website, we downloaded all the 260 chapters from the New Testament in several languages. We selected (after having manually sampled the website) non-drama versions (as opposed to drama) that contain standard speech and pronunciation, and mostly, no background music. The audios are also converted from stereo to mono for the purpose of the following steps. 2. Aligning speech and text for each chapter. For each chapter, we extracted speech-to-text alignments through the Maus forced aligner8 (Kisler et al., 2017) online platform. During this step, we kept languages with good audio quality and for which an acoustic model was available in the offthe-shelf forced aligner tool. Our \ufb01nal set was reduced to the following eight languages: Basque, English, Finnish, French, Hungarian, Romanian, Russian and Spanish. 3. Segmenting chapters into verses. Any written chapter of the Bible is inherently segmented into verses. A verse is the minimal segmentation unit used in the Bible and corresponds to a sentence, or more rarely to a phrase or a clause. In order to segment our audio \ufb01les in such smaller units, we aligned our TextGrid \ufb01les (from step 2) with a written version of the Bible containing verse information. This alignment is rather trivial, since, after removing punctuation, both texts have the same content. After this step, all audio chapters are segmented into verses and receive IDs based on their English chapter name, and their verse number (e.g. \u201cMatthew chapter1 verse3\u201d). 8Available at https://clarin.phonetik. uni-muenchen.de/BASWebServices/interface/ WebMAUSBasic 3.3.2. Result and Comparison Considering that all chapters consist of the same set of verses, the verse numbers give us a multilingual alignment between all verses for all the language pairs.9 Thus, the output of our pipeline is a set of 8,160 audios segments, aligned at verse-level, in eight different languages, with an average of 20 hours of speech for each language. Finally, corpus statistics are presented in Table 1. For justifying the need of extending the approach presented in Section 3.2, Table 2 presents a comparison between our corpus (bottom) output and theirs (top). This comparison takes the speech \ufb01le numbering on their pipeline as multilingual alignment clue, since no other information is available. We can observe that by segmenting based on punctuation, the multilingual alignment quickly becomes incorrect: the segmentation on the third \ufb01le, based on a punctuation mark not present in the English text, shifts the alignment for the rest of the chapter. 3.3.3. Reproducibility The presented pipeline performs automatic verse-level alignment using Bible chapters. All the scripts used in this work are available, together with the resulting dataset.3 For extending it to a new language, here are some recommendations: \u2022 Bible version: as discussed in Section 3.1, a language can have several versions available on the website. For ensuring the best quality possible, manual inspection in one chapter can be quickly performed to identify a non-drama version, but it is not mandatory. \u2022 Alignment Tool: for generating verse-level alignment, a chapter-level alignment between speech and text is needed. While we use the Maus forced aligner for this task, any aligner able to provide a TextGrid \ufb01le as output can be used at this stage. 4. Resource Evaluation and Analysis 4.1. 
Human Evaluation: Speech Alignment Quality Having obtained multilingual alignments between spoken utterances, we attest their quality by performing a human evaluation on a corpus subset, covering the eight language pairs for which we were able to \ufb01nd bilingual judges. We implemented an online evaluation platform with 100 randomly selected verses in these 8 different language pairs. Judges were asked to evaluate the spoken alignments using a scale from 1 to 5 (1 meaning the two audio excerpts do not have any information in common, and 5 meaning they are perfectly aligned). Aiming at the most uniform evaluation possible, we provided guidelines and examples to our evaluators. Transcriptions were also displayed as a cognitive support in evaluation. The eight language pairs are the following: FrenchEnglish (FR-EN), French-Spanish (FR-ES), FrenchRomanian (FR-RO), English-Spanish (EN-ES), English9This is mostly true, but for a small subset of chapters, due to different Bible versions and different translation approaches, the number of aligned speech verses will differ slightly. \fLanguages # types # tokens Types per verse Tokens per verse Avg. token length Audio length (h) Avg. verse length (s) (EN) English 6,471 176,461 18.03 21.52 3.82 18.50 8.27 (ES) Spanish 11,903 168,255 17.90 20.52 4.17 21.49 9.58 (EU) Basque 14,514 128,946 14.88 15.78 5.55 22.76 9.75 (FI) Finnish 18,824 134,827 15.04 16.44 5.66 23.16 10.21 (FR) French 10,080 183,786 19.25 22.36 4.02 19.41 8.62 (HU) Hungarian 20,457 135,254 15.01 16.46 5.07 21.12 9.29 (RO) Romanian 9,581 169,328 18.19 20.61 4.14 23.11 10.16 (RU) Russian 16,758 129,973 14.50 15.82 4.44 22.90 9.70 Table 1: Statistics of the MaSS corpus. Alignment from Black (2019) Files French English 00001 Matthieu Matthew 00002 J\u00b4 esus descend de la montagne et des foules nombreuses le suivent. When he came down from the mountainside, large crowds followed him. 00003 Un l\u00b4 epreux s\u2019approche, il se met ` a genoux devant J\u00b4 esus et lui dit : A man with leprosy came and knelt before him and said, \u201cLord, if you are willing, you can make me clean.\u201d 00004 Seigneur, si tu le veux, tu peux me gu\u00b4 erir ! Jesus reached out his hand and touched the man. \u201cI am willing,\u201d he said. \u201cBe clean!\u201d Immediately he was cured of his leprosy. Our alignment Verses French English 00 Matthieu 8 Matthew 8 01 Lorsque J\u00b4 esus fut descendu de la montagne une grande foule le suivit When he came down from the mountain great crowds followed him 02 Et voici un l\u00b4 epreux s\u2019\u00b4 etant approch\u00b4 e se prosterna devant lui et dit : Seigneur si tu le veux tu peux me rendre pur And behold a leper came to him and knelt before him saying Lord if you will you can make me clean 03 J\u00b4 esus \u00b4 etendit la main le toucha et dit : Je le veux sois pur Aussit\u02c6 ot il fut puri\ufb01\u00b4 e de sa l` epre And Jesus stretched out his hand and touched him saying I will be clean And immediately his leprosy was cleansed Table 2: A comparison between CMU\u2019s multilingual alignment and ours. Text in italic shows alignment mismatches between English and French. We used a slightly different (non-drama) version of the Bible, hence the small differences in the displayed texts. Finnish (EN-FI), English-Hungarian (EN-HU), EnglishRomanian (EN-RO) and English-Russian (EN-RU). This selection is a trade-off between the dif\ufb01culty of \ufb01nding judges and the desire to provide a good typological variety in our evaluation data. 
Basque was also chosen due to the fact it is language isolate, that is, a language that has no known connection to any other language. However, we were unable to \ufb01nd judges to perform the evaluation on any language pair including it. Table 3 summarizes the results of the human evaluation. Evaluation scores are good, with a mean value of 4.41. Moreover, for every language pair evaluated (except for FR-ES), the median score is the maximum score, hence con\ufb01rming the quality of the alignment. However, when trying to quantify rater\u2019s agreement, we obtained mixed results. Percentage of agreement with tolerance 1 (meaning raters differing by one-scale degree are interpreted as agreeing) varies from 59.6% (EN-RO) to 95.96% (EN-HU). 4.2. Corpus Linguistic Analysis Regarding content, the corpus features languages belonging to different families. These are listed as follows: \u2022 Indo-European: \u2013 Romance: French, Romanian, Spanish \u2013 Germanic: English \u2013 Slavic: Russian \u00af x \u03c3 med min max # Eval. EN ES 4.56 0.62 5 3 5 2 EN FI 4.37 0.92 5 1 5 1 EN HU 4.44 0.88 5 1 5 2 EN RO 4.24 0.97 5 1 5 6 EN RU 4.56 0.83 5 1 5 3 FR EN 4.38 0.79 5 1 5 5 FR ES 4.22 0.89 4 2 5 2 FR RO 4.51 0.90 5 1 5 1 All 4.36 0.88 5 1 5 22 Table 3: Result of the manual inspection of the speech alignment quality performed on 8 language pairs (100 sentences). Scale is from 1 to 5 (higher is better). Last column refers to the number of evaluators for a given language pair. \u2022 Uralic: \u2013 Ugric: Hungarian \u2013 Finnic: Finnish \u2022 Language Isolate: Basque It should be noted that these languages are very different from a typological point of view. First of all, Basque, Finnish, Hungarian, Romanian and Russian mainly use case marking to indicate the function of a word10 in a sen10Case markers are small grammatical morphemes added to a \ftence, while English, French and Spanish rely on word position and prepositions for the same purposes. Basque, Finnish and Hungarian are agglutinative languages, while English, French, Romanian, Russian and Spanish are fusional languages. Thus, for the former group, grammatical markers will bear only one meaning, while in the latter, grammatical markers will bear several meanings at the same time.11 Basque is even more special as this language features ergative-absolutive marking while the other languages use nominative-accusative marking. In languages using ergative-absolutive marking, the subject of an intransitive verb and the patient of a transitive verb are treated alike and receive the same case marker, while the agent of a transitive verb is treated differently than the subject of an intransitive verb. Romanian also presents an interesting morphological characteristic regarding determiners: the de\ufb01nite article is suf\ufb01xed to the word whereas inde\ufb01nite articles are usually pre\ufb01xed, for instance: \u201cun-b\u02d8 aiat\u201d (INDEF-boy: \u201ca boy\u201d) and \u201cb\u02d8 aiat-ul\u201d (boy-DEF: \u201cthe boy\u201d). Finnish and Russian on the other hand do not have any article, neither de\ufb01nite nor inde\ufb01nite. Another interesting linguistic phenomenon to observe is the existence of grammatical genders. Russian features three genders (feminine, masculine and neutral) whereas French features only two (feminine and masculine), and Basque and Finnish present no grammatical genders at all. 
From a syntactic point of view, English, French and Spanish have a relatively \ufb01xed word order (and mainly follow the SubjectVerb-Object (SVO) pattern), while word order is more \ufb02exible in Basque, Finnish, Hungarian, Romanian and Russian, mainly due to the fact that these languages use case markers. Due to all the diverse linguistic features described in this section, we believe this dataset could be used for a wide variety of tasks, such as natural language grammar induction from raw speech, automatic typological features retrieval, speech-to-speech translation, and speech-to-speech retrieval. The latter is illustrated on Section 5. Moreover, this dataset could also serve as a benchmark for evaluating computational language documentation techniques that work on speech inputs. 5. Use Case: Multilingual Speech Retrieval Task Baseline In this section we showcase the usefulness of our corpus on a multilingual setting. We perform speech-tospeech retrieval by adapting a model for visually grounded speech (Harwath et al., 2018b), and we discuss the results for our baseline model. word to indicate its grammatical function (eg. subject, object, etc.) within a clause/sentence. 11Compare Hungarian \u201ch\u00b4 az-ak-nak\u201d (house-PL-DAT) and Russian \u201c\u0434\u043e\u043c-\u0430\u043c\u201d (house-PL.DAT). Words in agglutinative languages are comparatively longer than their equivalent in fusional languages. 5.1. Task and Model De\ufb01nition For performing multilingual speech retrieval, we adapted the model12 proposed by Harwath et al. (2018b). This model was primarily designed to retrieve images from speech utterances, and it is made of two networks: a speech and a image encoder. By projecting both representations to the same shared space, the model is thus able to learn the relationship between speech segments and the image contents. For our speech-to-speech task, we replaced the image encoder by a (second) speech encoder.13 Both speech encoders consist of a convolution bank (Wang et al., 2017) followed by two layers of bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and of an attention mechanism (Bahdanau et al., 2015) which computes a weighted average of the LSTM\u2019s activations. The convolution bank consists of a set of K = 16 1D-convolution \ufb01lters, where the kth convolution has a kernel of width k. Each convolution \ufb01lter consists of 40 units with ReLU activation and stride of 1. The batch-normed output of each convolution is then stacked and the resulting matrix is linearly projected to \ufb01t the LSTM\u2019s input dimension of size 256. Our model\u2019s inputs are Mel \ufb01lterbank spectrograms (40 mel coef\ufb01cients with a Hamming window size of 25ms and stride of 10ms) extracted from raw speech. The network is trained to minimize the contrastive loss function in Equation 1, which minimizes the cosine distance d between a verse in a given language A, and its corresponding verse in a given language B. It does so by maximizing the distance between mismatching verses pairs (with a given margin \u03b1). Thus, verses corresponding to direct translations should lie close in the embedding space. Finally, contrary to Harwath et al. (2018a), in which only one negative example for caption is sampled, we adopted the method from Chrupa\u0142a et al. (2017), considering every other verse in the batch as a negative example. 
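The training objective sketched below makes this concrete: a margin-based contrastive loss over a batch of paired verse embeddings, with every mismatched verse in the batch serving as a negative (the exact objective is written out in Equation 1, which follows). This is a hedged PyTorch re-implementation under our own naming, not the authors' code:

```python
import torch
import torch.nn.functional as F

def contrastive_verse_loss(emb_a, emb_b, margin=0.2):
    """emb_a, emb_b: (batch, dim) embeddings of spoken verses in languages A and B;
    row i of emb_a is assumed to be the translation of row i of emb_b."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    dist = 1.0 - emb_a @ emb_b.t()            # dist[i, j] = cosine distance d(a_i, b_j)
    pos = dist.diag()                         # d(v_A, v_B) for matching pairs
    # mismatched A side: d(a_j, b_i) should exceed d(a_i, b_i) by at least the margin
    viol_a = torch.clamp(margin + pos.unsqueeze(0) - dist, min=0.0)
    # mismatched B side: d(a_i, b_j) should exceed d(a_i, b_i) by at least the margin
    viol_b = torch.clamp(margin + pos.unsqueeze(1) - dist, min=0.0)
    off_diag = 1.0 - torch.eye(dist.size(0), device=dist.device)  # drop the true pairs
    return ((viol_a + viol_b) * off_diag).sum() / dist.size(0)
```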
L(vA, vB, \u03b1) = X vA,vB X v\u2032 A max[0, \u03b1 + d(vA, vB) \u2212d(v\u2032 A, vB)] + X v\u2032 B max[0, \u03b1 + d(vA, vB) \u2212d(vA, v\u2032 B)] ! (1) 5.2. Results We trained an instance of this model for seven language pairs, always keeping English as source language. The 8,160 common verses were randomly split between train (80%), validation (10%) and test (10%) sets. Batches were of size 16, and models were all trained for 100 epochs. Table 4 presents our results for the retrieval task. Results show that, while such a speech-to-speech task is challenging, it is possible to obtain bilingual speech embeddings that perform reasonably well on a multilingual retrieval task. The recall and rank results are far above the 12Available at https://github.com/dharwath/ DAVEnet-pytorch 13Modi\ufb01ed code available at https://github.com/ getalp/BibleNet \fQuery R@1 R@5 R@10 e r EN-EU 0.173 0.395 0.523 9 EN-ES 0.130 0.341 0.469 12 EN-HU 0.116 0.319 0.455 13 EN-RU 0.102 0.308 0.414 16 EN-RO 0.085 0.289 0.396 17 EN-FR 0.092 0.259 0.364 22 EN-FI 0.076 0.202 0.293 26 Table 4: Recall at top 1, 5, and 10 retrieval. Median rank e r on a verse-to-verse retrieval task is also provided. Results are reported on the test set (816 verses). Chance recalls are 0.001 (R@1), 0.006 (R@5) and 0.012 (R@10). Chance median e r is 408.5. chance values. We also scored a simple baseline that uses utterance length to retrieve spoken verses (in other words, it uses only distance between spoken utterances\u2019 lengths to solve the retrieval task). With this baseline, medium ranks are better than chance level (e r = 408) but vary from e r = 136 (EN-FR) to e r = 219 (EN-FI), which is very poor compared to our baseline model. Interestingly, our best results, obtained for EN-EU (e r = 9) and EN-ES (e r = 12), illustrate that speech-to-speech retrieval task is feasible even for pairs of typologically different languages. Following this experiment, we investigated the correlation between the median rank and two variables: the quality of the alignment (human evaluation) and the syntactic distance between the language pairs (using the lang2vec library (Littell et al., 2017)). Results are provided at Table 5. While there is no correlation between the rank and the syntactic distance, there is a strong negative correlation with respect to the human evaluation (signi\ufb01cant for p < 0.1). One possible explanation for this result may be that higher quality alignments (measured by the human evaluation e x) lead to a slightly easier corpus for the speech-speech retrieval task (dif\ufb01culty being measured by the rank e r). If con\ufb01rmed, this result would suggest that speech-to-speech retrieval scores are a good proxy for rating alignment corpus quality, as performed for text by Schwenk et al. (2019) through the use of NMT. Languages e r Quality (\u00af x) Syntactic dist. EN EU 9 NA 0.61 EN ES 12 4.56 0.40 EN HU 13 4.44 0.57 EN RU 16 4.56 0.49 EN RO 17 4.51 0.53 EN FR 22 4.38 0.46 EN FI 26 4.37 0.53 Correlation -0.76 -0.21 Table 5: Correlation between median rank and 1) alignment quality (from manual evaluation) 2) syntactic distance between languages (measured with lang2vec). 6." + }, + { + "url": "http://arxiv.org/abs/1907.00184v2", + "title": "Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery in Low-resource Settings", + "abstract": "Since Bahdanau et al. [1] first introduced attention for neural machine\ntranslation, most sequence-to-sequence models made use of attention mechanisms\n[2, 3, 4]. 
While they produce soft-alignment matrices that could be interpreted\nas alignment between target and source languages, we lack metrics to quantify\ntheir quality, being unclear which approach produces the best alignments. This\npaper presents an empirical evaluation of 3 main sequence-to-sequence models\n(CNN, RNN and Transformer-based) for word discovery from unsegmented phoneme\nsequences. This task consists in aligning word sequences in a source language\nwith phoneme sequences in a target language, inferring from it word\nsegmentation on the target side [5]. Evaluating word segmentation quality can\nbe seen as an extrinsic evaluation of the soft-alignment matrices produced\nduring training. Our experiments in a low-resource scenario on Mboshi and\nEnglish languages (both aligned to French) show that RNNs surprisingly\noutperform CNNs and Transformer for this task. Our results are confirmed by an\nintrinsic evaluation of alignment quality through the use of Average Normalized\nEntropy (ANE). Lastly, we improve our best word discovery model by using an\nalignment entropy confidence measure that accumulates ANE over all the\noccurrences of a given alignment pair in the collection.", + "authors": "Marcely Zanon Boito, Aline Villavicencio, Laurent Besacier", + "published": "2019-06-29", + "updated": "2019-09-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Sequence-to-Sequence (S2S) models can solve many tasks where source and target sequences have different lengths. For learning to focus on speci\ufb01c parts of the input at decoding time, most of these models are equipped with attention mechanisms [1, 2, 3, 4, 6]. By-products of the attention are softalignment probability matrices, that can be interpreted as alignment between target and source. However, we lack metrics to quantify their quality. Moreover, while these models perform very well in a typical use case, it is not clear how they would be affected by low-resource scenarios. This paper proposes an empirical evaluation of well-known S2S models for a particular S2S modeling task. This task consists of aligning word sequences in a source language with phoneme sequences in a target language, inferring from it word segmentation on the target side [5]. We concentrate on three models: Convolutional Neural Networks (CNN) [2], Recurrent Neural Networks (RNN) [1] and Transformer-based models [3]. While this word segmentation task can be used for the extrinsic evaluation of the soft-alignment probability matrices produced during S2S learning, we also introduce Average Normalized Entropy (ANE), a task-agnostic con\ufb01dence metric to quantify the quality of the source-to-target alignments obtained. Experiments performed on a low-resource scenario for two languages (Mboshi and English) using equivalently sized corpora aligned to French, are, to our knowledge, the \ufb01rst empirical evaluation of these well-known S2S models for a word segmentation task. We also illustrate how our entropy-based metric can be used in a language documentation scenario, helping a linguist to ef\ufb01ciently discover types, in an unknown language, from an unsegmented sequence of phonemes. This work is thus also a contribution to the emerging computational language documentation domain [7, 8, 9, 10, 11], whose main goal is the creation of automatic approaches able to help the documentation of the many languages soon to be extinct [12]. 
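The segmentation idea evaluated in this paper (spelled out in Section 2.1 below) reduces to a simple post-processing of the attention matrix: each phone is attributed to the translation word that receives its highest attention weight, and consecutive phones attributed to the same word are merged into one discovered unit. A minimal numpy sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def segment_from_attention(attn, phones):
    """attn: (n_phones, n_words) soft-alignment probabilities for one utterance;
    phones: list of phone symbols. Returns (merged_phones, aligned_word_index) pairs."""
    best_word = attn.argmax(axis=1)           # hard alignment: one word per phone
    segments, start = [], 0
    for i in range(1, len(phones) + 1):
        # close the current unit when the aligned word changes (or at the end)
        if i == len(phones) or best_word[i] != best_word[start]:
            segments.append(("".join(phones[start:i]), int(best_word[start])))
            start = i
    return segments
```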
Lastly, studies focused on comprehensive attention mechanisms for NMT [13, 14, 15] lack evaluation of the resulting alignments, and the exceptions [16] do so for the task of word-to-word alignment in well-resourced languages. Differently, our work is not only an empirical evaluation of NMT models focused on alignment quality, but it also tackles the data scarcity of low-resource scenarios. 2. Experimental Settings 2.1. Unsupervised Word Segmentation from Speech Since in language documentation scenarios the available corpora usually contain speech in the language to document aligned with translations in a well-resourced language, Godard et al. [5] introduced a pipeline for performing Unsupervised Word Segmentation (UWS) from speech. The system outputs timestamps delimiting stretches of speech, associated with class labels, corresponding to real words in the language. The pipeline consists of first transforming speech into a sequence of phonemes, either through Automatic Unit Discovery (e.g. [17]) or manual transcription. The phoneme sequences, together with their translations, are then fed to an attention-based S2S system that produces soft-alignment probability matrices between target and source languages. The alignment probability distributions between the phonemes and the translation words (as in Figure 1) are used to cluster (segment) together neighboring phonemes whose alignment distribution peaks at the same word. The final speech segmentation is evaluated with the Zero Resource Challenge (ZRC) 2017 evaluation suite, track 2 (available at http://zerospeech.com/2017). We improve over [5] by removing silence labels before training and using them for segmentation, which results in slightly better scores. Figure 1: Soft-alignment probability matrices from the UWS task. ANE values (from left to right) are 0.11, 0.64 and 0.83. The gold segmentation is "BAH1T MAA1MAH0 PAA1PAH0 IH0Z AW1T", which corresponds to the English sentence "But mama, papa is out". 2.2. Parallel Speech Corpora The parallel speech corpora used in this work are the English-French (EN-FR) [18] and the Mboshi-French (MB-FR) [19] corpora. The EN-FR corpus is a 33,192-sentence multilingual extension of Librispeech [20], with English audio books automatically aligned to French translations. MB-FR is a 5,130-sentence corpus from the language documentation process of Mboshi (Bantu C25), an endangered language spoken in Congo-Brazzaville. Thus, while the former corpus presents a larger vocabulary and longer sentences, the latter presents a more tailored environment, with short sentences and a simpler vocabulary. In order to provide a fair comparison, as well as to study the impact of corpus size, the EN-FR corpus was also down-sampled to 5K utterances (the exact same size as the MB-FR corpus). Sub-sampling was conducted preserving the average number of tokens per sentence, shown in Table 1. 2.3. Introducing Average Normalized Entropy (ANE) In this paper, we focus on studying the soft-alignment probability matrices resulting from the learning of S2S models for the UWS task. To assess the overall quality of these matrices without having gold alignment information, we introduce Average Normalized Entropy (ANE).
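Before the formal definition given next, the metric can be summarized in a few lines of numpy: for each phone, take the entropy of its alignment distribution over source words, normalize by the logarithm of the source length, and average over the sentence. Array and function names below are ours:

```python
import numpy as np

def sentence_ane(attn, eps=1e-12):
    """attn: (n_phones, n_source_words) soft-alignment matrix, rows assumed to sum to 1.
    Needs at least two source words for the log-base-|s| normalization to be defined."""
    n_words = attn.shape[1]
    p = np.clip(attn, eps, 1.0)
    ne = -(p * np.log(p) / np.log(n_words)).sum(axis=1)   # normalized entropy per phone
    return float(ne.mean())                               # average over the sentence
```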
Definition: Given the source and target pair (s, t), of lengths |s| and |t| respectively, for every phone $t_i$ the normalized entropy (NE) is computed considering all possible words in s (Equation 1), where $P(t_i, s_j)$ is the alignment probability between the phone $t_i$ and the word $s_j$ (a cell in the matrix). The ANE for a sentence is then defined as the arithmetic mean over the resulting NE of every phone in the sequence t (Equation 2). $NE(t_i, s) = -\sum_{j=1}^{|s|} P(t_i, s_j) \cdot \log_{|s|} P(t_i, s_j)$ (1) $ANE(t, s) = \frac{1}{|t|} \sum_{i=1}^{|t|} NE(t_i, s)$ (2) From this definition, we can derive ANE at different granularities (sub- or supra-sentential) by accumulating its value over the full corpus, a single type, or a single token. Corpus ANE will be used to summarize the overall performance of an S2S model on a specific corpus. Token ANE extends ANE to tokens by averaging NE over all phonemes of a single (discovered) token. Type ANE results from averaging the ANE of every token instance of a discovered type. Finally, Alignment ANE is the result of averaging the ANE of every discovered (type, translation word) alignment pair. The intuition that lower ANE values correspond to better alignments is exemplified in Figure 1. 3. Empirical Comparison of S2S Models We compare three NMT models (§3.1, §3.2, §3.3) for UWS, focusing on their ability to align words (French) with phonemes (English or Mboshi) in medium- to low-resource settings. The results, an analysis of the impact of data size and quality, and the correlation between the intrinsic (ANE) and extrinsic (boundary F-score) metrics are presented in §3.4. The application of ANE to type discovery in low-resource settings is presented in §3.5. 3.1. RNN: Attention-based Encoder-Decoder The classic RNN encoder-decoder model [1] connects a bidirectional encoder with a unidirectional decoder through an alignment module. The RNN encoder learns annotations for every source token, and these are weighted by the alignment module for the generation of every target token. The weights are defined as context vectors, since they capture the importance of every source token for the generation of each target token. Attention mechanism: a context vector for decoder step t is computed using the set of source annotations H and the last state of the decoder network (translation context). The attention is the result of the weighted sum of the source annotations H (with $H = h_1, ..., h_A$) and their $\alpha$ probabilities (3), obtained through a feed-forward network align (4). $c_t = Att(H, s_{t-1}) = \sum_{i=1}^{A} \alpha_i^t h_i$ (3) $\alpha_i^t = \mathrm{softmax}(align(h_i, s_{t-1}))$ (4) 3.2. Transformer The Transformer [3] is a fully attentional S2S architecture that has obtained state-of-the-art results in several NMT shared tasks. It replaces sequential cell units (such as LSTM) with Multi-Head Attention (MHA) operations, which makes the architecture considerably faster. Both the encoder and the decoder are stacks of layers that receive the source and target sequences, embedded and concatenated with positional encoding. An encoder layer is made of two sub-layers: a Self-Attention MHA and a feed-forward sub-layer. A decoder layer is made of three sub-layers: a masked Self-Attention MHA (no access to subsequent positions); an Encoder-Decoder MHA (operating over the encoder stack's final output and the decoder self-attention output); and a feed-forward sub-layer. Dropout and residual connections are applied between all sub-layers.
Final output probabilities are generated by applying a linear projection over the decoder stack\u2019s output, followed by a softmax operation. Multi-head attention mechanism: attention is seen as a mapping problem: given a pair of key-value vectors and a query vector, the task is the computation of the weighted sum of the given values (output). In this setup, weights are learned by compatibility functions between key-query pairs (of dimension dk). For a given query (Q), keys (K) and values (V) set, the Scaled \fTable 1: Statistics of the three source-target data sets. #types #tokens average( token length) average( #tokens / sentence) corpus source target source target source target source target EN-FR (33k) 21,083 33,135 381,044 467,475 4.37 4.57 11.48 14.08 EN-FR (5k) 8,740 12,226 59,090 72,670 4.38 4.57 11.52 14.17 MB-FR (5k) 6,633 5,162 30,556 42,715 4.18 4.39 5.96 8.33 Dot-Product (SDP) Attention function is computed as: Att(V, K, Q) = softmax(QKT \u221adk )V (5) In practice, several attentions are computed for a given QKV set. The QKV set is \ufb01rst projected into h different spaces (multiple heads), where the SDP attention is computed in parallel. Resulting values for all heads are then concatenated and once again projected, yielding the layer\u2019s output. (6) and (7) illustrate the process, in which H is the set of h heads (H = h1, ..., hh) and f is a linear projection. Self-Attention de\ufb01nes the case where query and values come from same source (learning compatibility functions within the same sequence of elements). MultiHead(V, K, Q) = f(Concat(H)) (6) hi = Att(fi(V ), fi(K), fi(Q)) (7) 3.3. CNN: Pervasive Attention Different from the previous models, which are based on encoder-decoder structures interfaced by attention mechanisms, this approach relies on a single 2D CNN across both sequences (no separate coding stages) [2]. Using masked convolutions, an auto-regressive model predicts the next output symbol based on a joint representation of both input and partial output sequences. Given a source-target pair (s, t) of lengths |s| and |t| respectively, tokens are \ufb01rst embed in ds and dt dimensional spaces via look-up tables. Token embeddings {x1, . . . , x|s|} and {y1, . . . , y|t|} are then concatenated to form a 3D tensor X \u2208R|t|\u00d7|s|\u00d7f0, with f0 = dt + ds, where: Xij = [yi xj] (8) Each convolutional layer l \u2208{1, . . . , L} of the model produces a tensor Hl of size |t| \u00d7 |s| \u00d7 fl, where fl is the number of output channels for that layer. To compute a distribution over the tokens in the output vocabulary, the second dimension of the tensor is used. This dimension is of variable length (given by the input sequence) and it is collapsed by max or average pooling to obtain the tensor HPool L of size |t|\u00d7fL. Finally, 1\u00d71 convolution followed by a softmax operation are applied, resulting in the distribution over the target vocabulary for the next output token. Attention mechanism: joint encoding acts as an attention-like mechanism, since individual source elements are re-encoded as the output is generated. The self-attention approach of [21] is applied. It computes the attention weight tensor \u03b1, of size |t| \u00d7 |s|, from the last activation tensor HL, to pool the elements of the same tensor along the source dimension, as follows: \u03b1 = softmax (W1 tanh (HLW2)) (9) HAtt L = \u03b1HL. 
(10) where W1 \u2208Rfa and W2 \u2208Rfa\u00d7fL are weight tensors that map the fL dimensional features in HL to the attention weights via an fa dimensional intermediate representation. 3.4. Comparing S2S Architectures For each S2S architecture, and each of the three corpora, we train \ufb01ve models (runs) with different initialization seeds.3 Before segmenting, we average the produced matrices from the \ufb01ve different runs as in [5]. Evaluation is done in a bilingual segmentation condition that corresponds to the real UWS task. In addition, we also perform segmentation in a monolingual condition, where a phoneme sequence is segmented with regards to the corresponding word sequence (transcription) in the same language (hence monolingual).4 Our networks are optimized for the monolingual task. Across all architectures, we use embeddings of size 64 and batch size of 32 (5K data set), or embeddings of size 128 and batch size of 64 (33K data set). Dropout of 0.5 and early-stopping procedure are applied in all cases. RNN models have only one layer, a bi-directional encoder, and cell size equal to the embedding size, as in [5]. CNN models use the hyper-parameters from [2] with only 3 layers (5K data set), or 6 (33K data set), and kernel size of 3. Transformer models were optimized starting from the original hyperparameters of [3]. Best results (among 50 setups) were achieved using 2 heads, 3 layers (encoder and decoder), warm-up of 5K steps, and using cross-entropy loss without label-smoothing. Finally, for selecting which head to use for UWS, we experimented using the last layer\u2019s averaged heads, or by selecting the head with minimum corpus ANE. While the results were not signi\ufb01cantly different, we kept the ANE selection. 3.4.1. Unsupervised Word Segmentation Results The word boundary F-scores5 for the task of UWS from phoneme sequence (in Mboshi or English) are presented in Table 2, with monolingual results shown for information only (topline). Surprisingly, RNN models outperform the more recent (CNN and Transformer) approaches. One possible explanation is the lower number of parameters (for a 5K setup, in average 700K parameters are trained, while CNN needs an additional 30.79% and Transformer 5.31%). However, for 33K setups, CNNs actually need 30% less parameters than RNNs, but still perform worse. Transformer\u2019s low performance could be due to the use of several heads \u201cdistributing\u201d alignment information across different matrices. Nonetheless, we evaluated averaged heads and single-head models, and these resulted in signi\ufb01cant decreases in performance. This suggests that this architecture may not need to learn explicit alignment to translate, but instead it could be capturing different kinds of linguistic information, as discussed in the original paper and in its examples [3]. Also, on the decoder side, the behavior of the selfattention mechanism on phoneme units is unclear and under3RNN, CNN and Transformer implementations from [22, 2, 23] respectively. 4This task can be seen as an automatic extraction of a pronunciation lexicon from parallel words/phonemes sequences. 5For CNN and RNN, average standard deviation for the bilingual task is of less than 0.8%. For Transformer, it is almost 4%. \fTable 2: Boundary F-scores for the UWS task. Bilingual Monolingual EN 33K RNN 77.10 99.80 CNN 71.30 98.60 Transformer 52.70 94.90 EN 5K RNN 70.40 99.30 CNN 55.90 98.80 Transformer 52.50 80.90 MB 5K RNN 74.00 92.50 CNN 68.20 89.80 Transformer 66.40 83.50 studied so far. 
For the encoder, Voita et al. [15] performed aftertraining encoder head removal based on head con\ufb01dence, showing that after initial training, most heads were not necessary for maintaining translation performance. Hence, we \ufb01nd the Multi-head mechanism interpretation challenging, and maybe not suitable for a direct word segmentation application, such as our method. As in [24], our best UWS method (RNN) for the bilingual task does not reach the performance level of a strong Bayesian baseline [25] with F-scores of 89.80 (EN33K), 87.93 (EN5K), and 77.00 (MB5K). However, even if we only evaluate word segmentation performance, our neural approaches learn to segment and align, whereas this baseline only learns to segment. Section 3.5 will leverage those alignments for a type discovery task useful in language documentation. The Pearson\u2019s \u03c1 correlation coef\ufb01cients between ANE and boundary F-scores for all mono and bilingual runs of all corpora (N = 30) are \u22120.98 (RNN), \u22120.97 (CNN), and \u22120, 66 (Transformer), with p-values smaller than 10\u22125. These strong negative correlations con\ufb01rm our hypothesis that lower ANEs correspond to sharper and better alignments. 3.4.2. Impact of Data Size and Quality EN33K and EN5K results of Table 2 allow us to analyze the impact of data size on the S2S models. For the bilingual task, RNN performance drops by 7% on average, whereas performance drop is bigger for CNN (14-15%). Transformer performs poorly in both cases, and increasing data size from 5K to 33K seems to help only for a trivial task (see monolingual results). The EN5K and MB5K results of Table 2 re\ufb02ect the impact of language pairs on the S2S models. We know from [26, 27] that English should be easier to segment than Mboshi, and this was con\ufb01rmed by both dpseg and monolingual results. However, this trend is not con\ufb01rmed in the bilingual task, where the quality of the (sentence aligned) parallel corpus seems to have more impact (higher boundary F-scores for MB5K than for EN5K for all S2S models). As shown in Table 1, MB-FR corpus has shorter sentences and smaller lexicon diversity, while ENFR is made of automatically aligned books (noisy alignments), what may explain our experimental results. 3.5. Type Discovery in Low-Resource Settings We investigate the use of Alignment ANE as a con\ufb01dence measure. From the RNN models, we extract and rank the discovered types by their ANE, and examine if it can be used to separate true words in the discovered vocabulary from the rest. The results for low-resource scenarios (only 5K) in Table 3 suggest that low ANE corresponds to the portion of the discovTable 3: Type retrieval results (RNN) using ANE for keeping most con\ufb01dent (type, translation) pairs. For instance, ANE = 0.4 means all discovered types have ANE \u22640.4. EN 5K MB 5K ANE P R F P R F 0.1 70.97 0.50 1.00 72.13 0.57 1.12 0.2 55.43 3.85 7.20 49.02 2.89 5.46 0.3 44.99 12.51 19.58 38.18 8.14 13.41 0.4 32.81 21.76 26.17 32.63 16.61 22.01 0.5 23.37 28.17 25.54 27.93 23.44 25.49 0.6 18.54 32.41 23.59 24.73 27.61 26.09 0.7 16.23 34.34 22.04 23.00 30.12 26.08 0.8 15.21 35.16 21.23 22.17 30.95 25.84 0.9 15.01 35.31 21.06 22.06 31.05 25.80 all 15.01 35.34 21.07 22.06 31.05 25.80 Table 4: Top 5 low and high ANE ranking for the discovered types (EN5K), with gold transcription and aligned information between parentheses (respectively). \u201cINV\u201d means incorrect type. 
Top Low ANE Top High ANE 1 SER1 (sir, EOS token) AH0 (a, convenable) 2 HHAH1SH (hush, chut) IH1 (INV, ah) 3 FIH1SHER0 (\ufb01sher, \ufb01sher) D (INV, riant) 4 KLER1K (clerk, clerc) N (INV, obit) 5 KIH1S (kiss, embrasse) YUW1 (you, diable) ered vocabulary the network is con\ufb01dent about, and these are, in most of the cases, true discovered lexical items (\ufb01rst row, P \u226570%).6 As we keep higher Alignment ANE values, we increase recall but loose precision. This suggests that, in a documentation scenario, ANE could be used as a con\ufb01dence measure by a linguist to extract a list of types with higher precision, without having to pass through all the discovered vocabulary. Moreover, as exempli\ufb01ed for EN5K in Table 4, we also retrieve aligned information (translation candidates) for the generated lexicon. 4." + }, + { + "url": "http://arxiv.org/abs/1807.10740v1", + "title": "A small Griko-Italian speech translation corpus", + "abstract": "This paper presents an extension to a very low-resource parallel corpus\ncollected in an endangered language, Griko, making it useful for computational\nresearch. The corpus consists of 330 utterances (about 20 minutes of speech)\nwhich have been transcribed and translated in Italian, with annotations for\nword-level speech-to-transcription and speech-to-translation alignments. The\ncorpus also includes morphosyntactic tags and word-level glosses. Applying an\nautomatic unit discovery method, pseudo-phones were also generated. We detail\nhow the corpus was collected, cleaned and processed, and we illustrate its use\non zero-resource tasks by presenting some baseline results for the task of\nspeech-to-translation alignment and unsupervised word discovery. The dataset is\navailable online, aiming to encourage replicability and diversity in\ncomputational language documentation experiments.", + "authors": "Marcely Zanon Boito, Antonios Anastasopoulos, Marika Lekakou, Aline Villavicencio, Laurent Besacier", + "published": "2018-07-27", + "updated": "2018-07-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction For many low-resource and endangered languages, speech data is easier to obtain than textual data. Oral tradition has historically been the main medium for passing cultural knowledge from one generation to the next, and at least 43% of the world\u2019s languages are still unwritten [1]. Traditionally, documentary records of endangered languages are created by highly trained linguists in the \ufb01eld. However, modern technology has the potential to enable creation of much larger-scale (but lowerquality) resources. Recently proposed frameworks [2, 3] propose collection of bilingual audio, rendering the resource interpretable through translations. New technologies have been developed to facilitate collection of spoken translations [4] along with speech in an endangered language, and there already exist recent examples of parallel speech collection efforts focused on endangered languages [5, 6, 7]. The translation is usually in a high-resource language that functions as a lingua franca of the area, as it is common for members of an endangered-language community to be bilingual. Tackling the issue of the possible vanishing of more than 50% of the current spoken languages by the year 2100 [8], the Computational Language Documentation (CLD) \ufb01eld assembles two different communities: linguistics and informatics, proposing challenges [9, 10, 11] and frameworks from speech signal [12, 13, 14]. 
However, as the interest on CLD approaches grows, it becomes clear the urgent need of more publicly available low-resource corpora to provide replicable evaluation of the proposed methods. We are aware of only a few endangered languages whose corpora are publicly available [15, 16]. Our work is part of this effort to share resources, and with this paper we present a corpus on a truly endangered dialect from south Italy, Griko. The corpus has several levels of information (speech, machine extracted pseudo-phones, transcriptions, translations and sentence alignment), and we believe it can be an interesting resource for evaluating documentation techniques on (very) low-resource settings. In addition, we provide baseline results for two tasks: speech-to-translation alignment and unsupervised word discovery. We encourage the community to challenge these results by using their own techniques. For word discovery, we also provide the gold standard for evaluation following the Track 2 of the Zero Resource Challenge (ZRC) 2017 [11]. These metrics were extensively described in [17, 11], and are another important community effort for increasing reproducibility. This paper is organized as follows: after a quick related work (section 2), the Griko language is presented (section 3). Data processing methodology (section 4) and dataset are then presented. Our baseline systems and results for two tasks (sections 5 and 6) are \ufb01nally described. 2. Related Work Unsupervised Word Discovery (UWD) systems operate on unsegmented speech utterances and their goal is to output timestamps delimiting stretches of speech, associated with class labels, corresponding to real words in the language. This task is already considered in the Zero Resource Speech Challenge1 in a fully unsupervised setting: systems must learn to segment from a collection of raw speech signals only. Here, we investigate a slightly more favorable case where speech utterances are multilingually grounded (using cross-lingual supervision, where a written translation is available for each utterance). In CLD scenarios, this task helps to attenuate the heavy charge on \ufb01eld linguists: the output vocabulary can be used as a \ufb01rst clue of the lexicon present in the language of interest. As a monolingual setup, UWD was previously investigated from text input [18] and from speech [19, 20, 13, 21]. The speech translation problem has been traditionally approached by feeding the output of a speech recognition system into a Machine Translation (MT) system. Speech recognition uncertainty was integrated with MT by using speech output lattices as input to translation models [22, 23]. A sequence-tosequence model for speech translation without transcriptions has been introduced [24], but was only evaluated on align1http://zerospeech.com/2017 \fment. Synthesized speech data were translated in [25] using a model similar to the Listen Attend and Spell model [26], while a larger-scale study [27] used an end-to-end system for translating audio books between French and English. Sequence-tosequence models to both transcribe Spanish speech and translate it in English have also been proposed [28], by jointly training the two tasks in a multitask scenario with two decoders sharing the speech encoder. This model was further extended [29] with the translation decoder receiving information both from the speech encoder and the transcription decoder. 
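The word discovery baselines reported later in this paper are scored with the ZRC Track 2 boundary metric mentioned above. As a rough illustration of what that scoring does, here is a simplified boundary precision/recall/F-score over predicted versus gold boundary positions; this is an illustration only, not the official ZRC evaluation code:

```python
def boundary_prf(predicted, gold):
    """predicted, gold: iterables of boundary positions for one utterance
    (e.g. symbol indices, or time stamps discretized to a common frame grid)."""
    pred, ref = set(predicted), set(gold)
    hits = len(pred & ref)
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(ref) if ref else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score
```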
For endangered languages (extremely low-resource settings) the lack of training data leads to the problem being framed as a sparse translation problem. This semi-supervised task lies between speech translation and keyword spotting, with cross-lingual supervision being used for word segmentation [30, 31, 32, 33]. Bilingual setups for word segmentation were discussed by [34, 35, 36, 37], but applied to speech transcripts (true phones). Among the most relevant to our approach are the works of [24] on speech-to-translation alignment using attentional Neural Machine Translation (NMT) and of [31, 32] for language documentation. However, the former does not address word segmentation and is not applied to a language documentation scenario, while the latter methods do not provide a full coverage of the speech corpus analyzed. A neural approach for word segmentation in documentation scenarios using the soft attention matrices (which we also use for our baseline experiments) was investigated in [37]. 3. The Griko Language Griko is a Greek dialect spoken in southern Italy, in the Greca Salentina area southeast of Lecce.2 There is another endangered Italo-Greek variety in southern Italy spoken in the region of Calabria, known as Grecanico or Greco. Both languages, jointly referred to as Italiot Greek, were included as seriously endangered in the UNESCO Red Book of Endangered Languages in 1999. Griko is only partially intelligible with modern Greek, and unlike other Greek dialects, it uses the Latin alphabet. In addition, it is rare among the Greek dialects, due to its retention of the in\ufb01nitive in particular syntactic contexts. Less than 20,000 people (mostly people over 60 years old) are believed to be native speakers [39, 40]; unfortunately, this number is quite likely an overestimation [41]. Resources in Griko are very scarce, with almost no corpora available for linguistic research. The \ufb01rst grammar of the language was composed by the German scholar Gerhard Rohlfs [42] to be followed by others [43]. Recently, a corpus of Griko narratives was released [44]: it contains 114 narratives originally collected by Vito Domenico Palumbo (1854\u20131928) the most noted Griko scholar [45, 46]. The narratives were further annotated with translations in Italian, and partly annotated with gold Part-of-Speech information. Here, we present and extend the only Griko speech corpus available online3 [47], consisting of about 20 minutes of speech in Griko, along with text translations into Italian. The original corpus (henceforth UoI corpus, as it is hosted at the University of Ioannina, Greece) consists of 330 mostly elicited utterances by nine native speakers, annotated with transcriptions, morphosyntactic tags, and glossed in Italian. 2A discussion on the possible origins of Griko can be found in [38]. 3http://griko.project.uoi.gr 4. Data Processing The original UoI corpus was collected during a \ufb01eld trip in Puglia, Italy by two linguists, with a particular focus on the use of in\ufb01nitive and verbal morphosyntax. The corpus contains utterances from 9 different speakers (5 male, 4 female) from the 4 villages (Calimera, Sternatia, Martano, Corigliano) where native speakers could still be found. The digitally collected audio \ufb01les (16-bit PCM, 44.1kHz, stereo) were manually segmented into utterances, transcribed, glossed in Italian, and annotated with extensive morphosyntactic tags by a trained linguist. 4.1. 
Annotation Extensions In order to render the UoI corpus useful for speech-related computational research on Griko, we extend the corpus with the following annotations: 1. Free-form Italian translations for every utterance, created by a bilingual speaker, 2. gold-standard word-level alignment information for every utterance, including annotated silences, 3. gold-standard speech-to-translation alignments, 4. pseudo-phones representation, obtained by using the acoustic unit discovery (AUD) method presented in [48], 5. ZRC gold standard for standard evaluation, described in the next section. Figure 1 shows an example of sentence pairs from our collection, and Table 1 presents some statistics on these aligned transcriptions and translations. We observe that both sides of the parallel corpus are considerably similar with respect to the metrics presented here (sentence structure and vocabulary). This is reasonable: the two languages belong to the same family and have been in contact for centuries. 4.2. A reference compatible with the ZRC metrics In addition to the word-level annotations, we built and make available a reference (in the format of the ZRC challenge) in order to allow evaluation of different word discovery approaches using this corpus. We had a manual alignment between speech and words, but no possibility to obtain an accurate automatic alignment between speech and phones (or graphemes) due to the very small amount of data available (not possible to train an acoustic model using a Kaldi pipeline on 330 signals, for instance). Thus, we used the word-level alignment information between speech and transcription, and the silence annotation available in our corpus, to approximate a speech-to-grapheme alignment. For each word present in the corpus, we retrieve its time window and segment this time window into smaller ones, giving to each existing grapheme an equal portion of its word time window. We manually corrected some of the silence and word annotations to ensure that we had no overlap between silence and words time windows. This approximation was necessary to make the ZRC metrics work. The \ufb01nal reference can be considered correct for evaluation of word discovery tasks (which do not take into account subword annotation), but should be consider with caution for evaluation of subword discovery tasks. Finally, we created two ZRC versions, one removing the silence tokens, used for grapheme evaluation, and a second one with all the information, used for pseudo-phones evaluation. \fGriko jat` \u0131 ` \u0131che polem` \u0131sonta ` oli tin addom` ada Italian perch\u00b4 e aveva lavorato tutta la settimana Figure 1: A tokenized and lower-cased sentence pair example in our Griko-Italian corpus. # tokens Vocabulary size Average tokens length Average # tokens per sentence Shortest token Largest token Griko 2,374 691 5.68 7.19 1 16 Italian 2,384 456 5.76 7.22 1 13 Table 1: Statistics of the 330 sentences in our parallel Griko-Italian corpus. Method P R F proportional 42.2 52.2 46.7 neural 24.6 30.0 27.0 DTW-EM 56.6 51.2 53.8 Table 2: On speech-to-translation alignment, the unsupervised model outperforms the neural attentional model and the naive baseline in terms of Precision and F-score. 5. Speech-to-Translation Alignment The task of speech-to-translation alignment is the problem of identifying portions in an audio segment that should be aligned to words in (text) translation, without access to transcriptions [24]. 
Our speech-to-translation alignment annotations allow us to evaluate such methods on our corpus. Evaluation is performed by computing standard precision, recall, and F-score on the links between speech frames and translation words. Providing a baseline for future work, Table 2 reiterates previous results on speech-to-translation alignment. We present results with three methods: a naive proportional baseline (proportional), a neural alignment model [24] (neural), and an unsupervised model (DTW-EM) [49]. The naive baseline assumes no reordering and simply segments the audio to as many segments as the translation words, each with a length proportional to the word\u2019s length in characters. The neural alignment model trains a speech-to-translation end-to-end sequence-tosequence system with attention on all the data, and then the soft attention matrices are converted to hard alignments between audio segments and translation words. DTW-EM is an unsupervised model that extends the IBM Model 2 alignment model [50] to work on speech segments, combining it with a Dynamic Time-Warping-based clustering approach [51]. Since the two languages have several similar characteristics, the naive proportional baseline is already very competitive; its recall is better than both other evaluated methods. The unsupervised model, however, achieves much higher Precision and F-score than the rest. Unsurprisingly, the neural approach performs signi\ufb01cantly worse in this setting: 330 sentences are clearly not enough to train a robust word-level model. 6. Unsupervised Word Discovery Experiments In this section we illustrate the use of our corpus for the task of unsupervised word discovery. We use three different baselines, one monolingual and two bilingual, and two different representation levels, graphemes (from text) and pseudo-phones (automatically extracted from speech). Evaluation is performed using the Boundary metric from the Zero Resource Challenge 2017 (Track 2) [11]. We compute recall, precision and F-score. Below, we describe the three baselines evaluated in this work. \u2022 Dpseg (monolingual): dpseg4 is the non-parametric bayesian model introduced in [52]. On this setup, words are generated by a bigram model over a non-\ufb01nity inventory, through the use of a Dirichlet-Process. Estimation is performed through Gibbs sampling. This approach is known as being very robust on low-resource scenarios. The hyper-parameters used here are the same from [53]. \u2022 Proportional Segmentation (bilingual): this baseline uses the word boundaries in the translation to segment the input proportionally. We can expect considerable good results for proportional segmentation when applied on language pairs similar on sentence structure and average token length, and therefore, we expect good results for this baseline when applied to the Griko-Italian corpus (see Table 1). \u2022 Neural Segmentation (bilingual): the method applied in this paper was presented in [37]. It post-processes a NMT system\u2019s soft-alignment probability matrices to generate hard segmentation. Due to the length discrepancy between the symbols (graphemes and pseudophones) and the translations, our post-processing included alignment smoothing. This procedure, proposed by [24], consists of adding temperature T to the softmax function used by the attention mechanism. Resulting soft-alignments matrices are further smoothed by averaging each probability by its right and left neighborhood. 
However, in this work we use T = 1 for all setups, and only the alignment matrices smoothing (averaging with the right and left neighbors) is used here. Also, for stability reasons, we report the averaged scores over 5 different trained models. \u2022 Merged Neural Segmentation (bilingual): the same methodology from the previous baseline, with the difference of averaging the soft-alignment probability matrices before post-processing, instead of averaging only the scores. We use the same 5 runs from the previous setup to generate an averaged (merged) segmentation. Table 3 presents the achieved results. Even on this very low-resource scenario, dpseg has a remarkable performance for 4Available at http://homepages.inf.ed.ac.uk/sgwater/resources.html. \fdpseg proportional neural merged neural P R F P R F P R F P R F grapheme 68.50 75.10 71.60 44.70 44.80 44.70 42.66 51.84 46.72 50.20 54.00 52.10 pseudo-phones 23.30 36.90 28.50 28.50 29.90 29.20 32.00 27.68 29.56 34.30 26.70 30.00 Table 3: Boundary scores for the task of unsupervised word segmentation. Results for neural segmentation are the average over 5 runs. Best results for each metric are presented in bold. grapheme pseudo-phones # tokens Vocabulary Size Average # tokens per sentence # tokens Vocabulary Size Average # tokens per sentence proportional 2,370 1,715 7.18 2,366 1,431 7.17 dpseg 2,629 567 7.97 3,912 520 11.85 neural (average) 2,972 1,462 9.01 1,929 1,066 5.84 merged neural 2,573 1,476 7.80 1,676 967 5.08 Table 4: A comparison between the generated segmentation by the four baselines. For the neural baseline, results are the arithmetic mean between the statistics for the 5 runs. the task of word segmentation working with graphemes. It retrieved 75.10% of the correct boundaries (recall). The second best method from the baselines for grapheme segmentation was the merged version of the neural segmentation. The remaining two baselines (proportional and neural) had close performance, achieving retrieval between 44 and 52%. For the pseudo-phones segmentation, all methods had a considerable drop in performance, specially dpseg. They all achieved similar F-scores, with the merged neural baseline being slightly more effective. Table 4 presents some numbers for the generated segmentation of all methods presented in this section. We observe that, for pseudo-phones, dpseg seems to oversegment the input (average tokens per sentence), while the neural baselines segmented the input considerably less. Lastly, pseudo-phones were obtained through an unsupervised unit discovery system, which inevitably adds noise to the representation. This noise is then propagated to the word discovery system. We believe the achieved results for pseudophones illustrate the dif\ufb01culty of the task of word discovery on extreme low-resource setups. 7." + }, + { + "url": "http://arxiv.org/abs/1709.05631v2", + "title": "Unwritten Languages Demand Attention Too! Word Discovery with Encoder-Decoder Models", + "abstract": "Word discovery is the task of extracting words from unsegmented text. In this\npaper we examine to what extent neural networks can be applied to this task in\na realistic unwritten language scenario, where only small corpora and limited\nannotations are available. We investigate two scenarios: one with no\nsupervision and another with limited supervision with access to the most\nfrequent words. 
Obtained results show that it is possible to retrieve at least\n27% of the gold standard vocabulary by training an encoder-decoder neural\nmachine translation system with only 5,157 sentences. This result is close to\nthose obtained with a task-specific Bayesian nonparametric model. Moreover, our\napproach has the advantage of generating translation alignments, which could be\nused to create a bilingual lexicon. As a future perspective, this approach is\nalso well suited to work directly from speech.", + "authors": "Marcely Zanon Boito, Alexandre Berard, Aline Villavicencio, Laurent Besacier", + "published": "2017-09-17", + "updated": "2017-09-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "INTRODUCTION Computational Language Documentation (CLD) aims at creating tools and methodologies to help automate the extraction of lexical, morphological and syntactic information in languages of interest. This paper focuses on languages (most of them endangered and unwritten) spoken in small communities all across the globe. Specialists believe that more than 50% of them will become extinct by the year 2100 [1], and manually documenting all these languages is not feasible. Initiatives for helping with this issue include organizing tasks [2, 3] and proposing pipelines for automatic information extraction from speech signals [4, 5, 6, 7, 8]. Methodologies for CLD should consider the nature of the collected data: endangered languages may lack a well-de\ufb01ned written form (they often are oral-tradition languages). Therefore, in the absence of a standard written form, one alternative is to align collected speech to its translation in a welldocumented language. Due to the challenge of \ufb01nding bilingual speakers to help in this documentation process, the collected corpora usually are of small size. One of the tasks involved in the documentation process is word segmentation. It consists of, given an unsegmented input, \ufb01nding the boundaries between word-like units. This input can be a sequence of characters or phonemes, or even raw speech. Such a system can be very useful to linguists, helping them start the transcription and documentation process. For instance, a linguist can use the output of such a system as an initial vocabulary, and then manually validate the generated words. Popular solutions for this task are Nonparametric Bayesian models [9, 10, 11, 12, 13] and, more recently, Neural Networks [5, 8, 14]. The latter have also been used for related tasks such as speech translation [15, 16] or unsupervised phoneme discovery [17]. Contribution. This paper is the \ufb01rst attempt to leverage attentional encoder-decoder models for language documentation of a truly unwritten language. We show that it is possible, from very little data, to perform unsupervised word discovery with a performance (F-score) only slightly lower than that of Nonparametric Bayesian models, known to perform very well on this task in limited data settings. Moreover, our approach aligns symbols in the unknown language with words from a known language which, as a by-product, bootstraps a bilingual dictionary. Therefore, in the remainder of this paper, we will use the term word discovery (instead of word segmentation), since our approach does not only \ufb01nd word boundaries but also aligns word segments to their translation in another language. 
Another reason why we are interested in attentional encoder-decoder models, is that they can easily be modi\ufb01ed to work directly from the speech signal, which is our ultimate goal. Approach. In a nutshell, we train an attention-based Neural Machine Translation (NMT) model, and extract the soft-alignment probability matrices generated by the attention mechanism. These alignments are then post-processed to segment a sequence of symbols (or speech features) in an unknown language (Mboshi) into words. We explore three improvements for our neural-based approach: alignment smoothing presented in [16], vocabulary reduction discussed in [18], and Moses-like symmetrization of our soft-alignment probability matrices. We also propose to reverse the translation direction, translating from known language words to arXiv:1709.05631v2 [cs.CL] 19 Sep 2017 \funknown language tokens. Lastly, we also study a semisupervised scenario, where prior knowledge is available, by providing the 100 most frequent words to the system. Outline. This paper is organized as follows: we present related work in Section 2, and the neural architecture, corpus, and our complete approach in Section 3. Experiments and their results are presented in Section 4 and 5, and are followed by an analysis in Section 6. We conclude our work with a discussion about possible future extensions in Section 7. 2. RELATED WORK Nonparametric Bayesian Models (NB models) [19, 20] are statistical approaches that can be used for word segmentation and morphological analysis. Recent variants of these models are able to work directly with raw speech [10], or with sentence-aligned translations [12]. The major advantage of NB models for CLD is their robustness to small training sets. Recently, [18] achieved their best results on a subset (1200 sentences) of the same corpus we use in this work by using a NB model. Using the dpseg system1 [9], they retrieved 23.1% of the total vocabulary (type recall), achieving a type F-score of 30.48%. Although NB models are well-established in the area of unsupervised word discovery, we wish to explore what neural-based approaches could add to the \ufb01eld. In particular, attention-based encoder-decoder approaches have been very successful in Machine Translation [21], and have shown promising results in End-to-End Speech Translation [15, 22] (translation from raw speech, without any intermediate transcription). This latter approach is especially interesting for language documentation, which often uses corpora made of audio recordings aligned with their translation in another language (no transcript in the source language). While attention probability matrices offer accurate information about word soft-alignments in NMT systems [21, 15], we investigate whether this is reproducible in scenarios with limited amounts of training data. That is because a notable drawback of neural-based models is their need of large amounts of training data [23]. We are aware of only one other work using an NMT system for unsupervised word discovery in a low-resource scenario. This work [16] used an 18,300 Spanish-English parallel corpus to emulate an endangered language corpus. Their approach for unsupervised word discovery is the most similar to ours. However, we go one step further: we apply such a technique to a real language documentation scenario. 
We work with only \ufb01ve thousand sentences in an unwritten African language (Mboshi), as we believe that this is more representative of what linguists may encounter when documenting languages. 1Available at http://homepages.inf.ed.ac.uk/sgwater/resources.html. # types #tokens avg # tokens per sentence Mboshi Dev 1,324 3,133 6.0 Mboshi Train 6,245 27,579 5.9 French Dev 1,343 4,321 8.2 French Train 4,903 38,226 8.4 Table 1: Organization of the corpus in development (Dev, 514 sentences) and training (Train, 4,643 sentences) sets for the neural model. 3. METHODOLOGY 3.1. Mboshi-French Parallel Corpus We use a 5,157 sentence parallel corpus in Mboshi (Bantu C25), an unwritten2 African language, aligned to French translations at the sentence level. Mboshi is a language spoken in Congo-Brazzaville, and it has 32 different phonemes (25 consonants and 7 vowels) and two tones (high and low). The corpus was recorded using the LIG-AIKUMA tool [24] in the scope of the BULB project [25]. For each sentence, we have a non-standard grapheme transcription (the gold standard for segmentation), an unsegmented version of this transcription, a translation in French, a lemmatization3 of this translation, and an audio \ufb01le. It is important to mention that in this work, we use Mboshi unsegmented non-standard grapheme form (close to language phonology) as a source while direct use of speech signal is left for future work. We split the corpus into training and development sets, using 10% for the latter. Table 1 gives a summary of the types (unique words) and tokens (total word counts) on each side of the parallel corpus. 3.2. Neural Architecture We use the LIG-CRIStAL NMT system4, using unsegmented text input for training. The model is easily extendable to work directly with speech [15]. Our NMT models follow [21]. A bidirectional encoder reads the input sequence x1, ..., xA and produces a sequence of encoder states h = h1, ..., hA \u2208 R2\u00d7n, where n is the chosen encoder cell size. A decoder uses its current state st and an attention mechanism to generate the next output symbol zt. At each time step t, the decoder computes a probability distribution over the target vocabulary. Then, it generates the symbol zt whose probability is the highest (it stops once it has generated a special end-ofsentence symbol). The decoder then updates its state st with the generated token zt. In our task, since reference transla2Even though it is unwritten, linguists provided a non-standard grapheme form, considered to be close to the language phonology. 3For tokenization and lemmatization we used TreeTagger [26]. 4Available at https://github.com/eske/seq2seq. \ftions are always available (even at test time), we always force feed previous ground-truth symbol wt instead of the generated symbol zt (teacher forcing). ct = attn(h, st\u22121) (1) yt = output(st\u22121 \u2295E(wt\u22121) \u2295ct) (2) zt = arg max yt (3) st = LSTM(st\u22121, E(wt) \u2295ct) (4) \u2295is the concatenation operator. s0 is initialized with the last state of the encoder (after a non-linear transformation), z0 = (special token), and E \u2208R|V |\u00d7n is the target embedding matrix. The output function uses a maxout layer, followed by a linear projection to the vocabulary size |V |. The attention function is de\ufb01ned as follows: ct = attn(h, st) = A X i=1 \u03b1t ihi (5) \u03b1t i = softmax(et i) (6) et i = vT tanh (W1hi + W2st + b2) (7) where v, W1, W2, and b2 are learned jointly with the other parameters of the model. 
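For readers who prefer code, here is a compact PyTorch sketch of the additive attention just defined in (5)-(7). Module and dimension names are ours; the actual implementation is the LIG-CRIStAL seq2seq system referenced above:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """e_i = v^T tanh(W1 h_i + W2 s + b2); alpha = softmax(e); c = sum_i alpha_i h_i."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w1 = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w2 = nn.Linear(dec_dim, attn_dim, bias=True)   # bias plays the role of b2
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (A, enc_dim) annotations; dec_state: (dec_dim,) current decoder state
        scores = self.v(torch.tanh(self.w1(enc_states) + self.w2(dec_state))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)    # soft-alignment weights over the input
        context = alpha @ enc_states             # weighted sum of encoder annotations
        return context, alpha
```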
At each time step (t) a score et i is computed for each encoder state hi, using the current decoder state st. These scores are then normalized using a softmax function, thus giving a probability distribution over the input sequence PA i=1 \u03b1t i = 1 and \u2200i, 0 \u2264\u03b1t i \u22641. The context vector ct used by the decoder, is a weighted sum of the encoder states. This can be understood as a summary of the useful information in the input sequence for the generation of the next output symbol zt. The weights \u03b1t i can be seen as a soft-alignment between input xi and output zt. Our models are trained using the Adam algorithm, with a learning rate of 0.001 and batch size (N) of 32. We minimize a cross-entropy loss between the output probability distribution pt = softmax(yt) and reference translation wt: L = 1 N N X i=1 loss(si = w1, ..., wT | xi) (8) loss(w1, .., .wT | xi) = \u2212 T X t |V | X j log ptj \u00d7 1(wt = Vj) (9) ptj = eytj P|V | k eytk (10) 3.3. Neural Word Discovery Approach Our full word discovery pipeline is illustrated in Figure 1. We start by training an NMT system using the Mboshi-French Fig. 1: Neural word discovery pipeline. parallel corpus, without the word boundaries on the Mboshi side. This is shown as step 1 in the \ufb01gure. We stop training once the training loss stops decreasing. At this point, we expect the alignment model to be the most accurate on the training data. Then we ask the model to forcedecode the entire training set. We extract soft-alignment probability matrices computed by the attention model while decoding (step 2). Finally, we post-process this soft-alignment information and infer a word segmentation (step 3). We \ufb01rst transform the soft-alignment into a hard-alignment, by aligning each source symbol xi with target word wt such that: t = arg maxi \u03b1t i. Then we segment the input (Mboshi) sequence according to these hard-alignments: if two consecutive symbols are aligned with the same French word, they are considered to belong to the same Mboshi word. 4. UNSUPERVISED WORD DISCOVERY EXPERIMENTS For the unsupervised word discovery experiments, we used the unsegmented transcription in Mboshi provided by linguists, aligned with French sentences. This Mboshi unsegmented transcription is made of 44 different symbols. We experimented with the following variations: 1. Alignment Smoothing: to deal with source (phones or graphemes) vs. target (words) sequence length discrepancy, we need to encourage many-to-one alignments between Mboshi and French. These alignments are needed in order to cluster Mboshi symbols into word-units. For this purpose, we implemented the alignment smoothing proposed by [16]. The softmax function used by the attention mechanism (see eq. 6) takes an additional temperature parameter: \u03b1t i = exp (et i/T)/ P j exp (et j/T) A temperature T greater than one5 will result in a less sharp softmax, which boosts many-to-one alignments. In addition, 5We use T = 10, like the original paper [16]. \fTOKENS TYPES Recall Precision F-score Recall Precision F-score Base Model (Mb-Fr) 7.16 4.50 5.53 12.85 6.41 8.55 Base Model (Mb-Fr) with Alignment Smoothing 6.82 5.85 6.30 15.00 6.76 9.32 Reverse Model (Fr-Mb) 20.04 10.02 13.36 18.62 14.80 16.49 Reverse Model (Fr-Mb) with Alignment Smoothing 21.44 16.49 18.64 27.23 15.02 19.36 Table 2: Unsupervised Word Discovery results with 4,643 sentences. 
the probabilities are smoothed by averaging each score with the scores of the two neighboring words: \u03b1t i \u2190(\u03b1t i\u22121 + \u03b1t i + \u03b1t i+1)/3 (equivalent to a low-pass \ufb01ltering on the soft-alignment probability matrix). 2. Reverse Architecture: in NMT, the soft-alignments are created by forcing the probabilities for each target word t to sum to one (i.e. P i \u03b1t i = 1). However, there is no similar constraint for the source symbols, as discussed in [16]. Because we are more interested in the alignment than the translation itself, we propose to reverse the architecture. The reverse model translates from French words to Mboshi symbols. This prevents the attention model from ignoring some Mboshi symbols. 3. Alignment Fusion: statistical machine translation systems, such as the Moses [27], extract alignments in both directions (source-to-target and target-to-source) and then merge them, creating the \ufb01nal translation model. This alignment fusion is often called symmetrization. We investigate whether this Moses-like symmetrization improves our results by merging the soft-alignments probability matrices generated by our base (Mboshi-French) and reverse (French-Mboshi) models. We replace each probability \u03b1t i by 1 2(\u03b1t i + \u03b2i t), where \u03b2i t is the probability for the same alignment i \u2194t in the reverse architecture. 4. Target Language Vocabulary Reduction: to reduce vocabulary size on the known language, we replace French words by their lemmas. The intuition is that, by simplifying the translation information, the model could more easily learn relations between the two languages. For the task of unsupervised word discovery, this technique was recently investigated by [18]. The base model (Mboshi to French) uses an embedding size and cell size of 12. The encoder stacks two bidirectional LSTM layers, and the decoder uses a single LSTM layer. The reverse model (French to Mboshi) uses an embedding size and cell size of 64, with a single layer bidirectional encoder and single layer decoder. We present in Table 2 the unsupervised word discovery task results obtained with our base model, and with the reverse model, with and without alignment smoothing (items 1 and 2). We notice that the alignment smoothing technique presented by [16] improved the results, especially for types. Moreover, we show that the proposed reverse model considerably improves type and token retrieval. This seems to con\ufb01rm the hypothesis that reversing the alignment direction results in a better segmentation (because the attention model has to align each Mboshi symbol to French words with a total probability of 1). This may also be due to the fact that the reverse model reads words and outputs character-like symbols which is generally easier than reading sequences of characters [28]. Finally, we achieved our best result by using the reverse model with alignment smoothing (last row in Table 2). We then used this latter model for testing alignment fusion and vocabulary reduction (items 3 and 4). For alignment fusion, we tested three con\ufb01gurations using matrices generated by the base and reverse models. We tested the fusion of the raw soft-alignment probability matrices (without alignment smoothing), the fusion of already smoothed matrices, as well as this latter fusion followed by a second step of smoothing. All these con\ufb01gurations lead to negative results: recall reduction between 3% and 5% for tokens and between 1% and 9% for types. 
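The post-processing just described (neighbour smoothing of the soft-alignment matrix, Moses-like symmetrization, and segmentation via hard alignments) can be sketched as follows. This is our own simplified rendering, not the pipeline's released code; boundary handling in the smoothing and the shape convention for the reverse-model matrix are assumptions.

```python
import numpy as np

def smooth(alpha):
    """Average each score with its two neighbours along the source axis,
    alpha[t, i] <- (alpha[t, i-1] + alpha[t, i] + alpha[t, i+1]) / 3
    (a low-pass filter on the soft-alignment matrix). Shape: (T target words, A source symbols).
    Edge padding at the borders is our choice."""
    padded = np.pad(alpha, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def symmetrize(alpha, beta):
    """Moses-like fusion: alpha[t, i] <- (alpha[t, i] + beta[i, t]) / 2,
    where beta comes from the reverse (French-to-Mboshi) model."""
    return 0.5 * (alpha + beta.T)

def segment(alpha, source_symbols):
    """Hard-align each source symbol i to the target word argmax_t alpha[t, i];
    consecutive symbols aligned to the same target word form one discovered word."""
    hard = alpha.argmax(axis=0)              # one target index per source symbol
    words, current = [], [source_symbols[0]]
    for i in range(1, len(source_symbols)):
        if hard[i] == hard[i - 1]:
            current.append(source_symbols[i])
        else:
            words.append("".join(current))
            current = [source_symbols[i]]
    words.append("".join(current))
    return words

# Toy usage with a random (3 target words x 7 source symbols) alignment matrix.
rng = np.random.default_rng(1)
alpha = rng.random((3, 7))
beta = rng.random((7, 3))                    # reverse-model alignments (axes swapped)
fused = symmetrize(alpha, beta)              # optional Moses-like fusion
print(segment(smooth(alpha), list("ngaimok")))
```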
We believe this happens because by averaging the reverse model\u2019s alignments with the ones produced by the base model (which does not have the constraint of using all the symbols) we degrade the generated alignments, more than exploiting information discovered in both directions. Lastly, when running the reverse architecture (with alignment smoothing) using French lemmas (vocabulary reduction), we also noticed a reduction in performance. The lemmatized model version had a recall drop of approximately 2% for all tokens and types metrics. We believe this result could be due to the nature of the Mboshi language, and not necessarily a generalizable result. Mboshi has a rich morphology, creating a different word for each verb tense, which includes radical and all tense information. Therefore, by removing this from the French translations, we may actually make the task harder, since the system is forced to learn to align different words in Mboshi to the same word in French. \fUnsupervised Semi-supervised Recall 27.23 29.49 Precision 15.02 24.64 F-score 19.36 26.85 # correct types 1,692 1,842 # generated types 11,266 7,473 Table 3: Types results for the semi-supervised word discovery task (100 known words, 4.653 sentences). 5. SEMI-SUPERVISED WORD DISCOVERY EXPERIMENTS A language documentation task is rarely totally unsupervised, since linguists usually immerse themselves in the community when documenting its language. In this section, we explore a semi-supervised approach for word segmentation, using our best reverse model from Section 4. To emulate prior knowledge, we select the 100 most frequent words in the gold standard for Mboshi segmentation. We consider this amount reasonable for representing the information a linguist could acquire after a few days. Our intuition is that providing the segmentation for these words could help improve the performance of the system for the rest of the vocabulary. To incorporate this prior information to our system, we simply add known tokens on the Mboshi side of the corpus, keeping the remaining symbols unsegmented. This creates a mixed representation, in which the Mboshi input has at the same time unsegmented symbols and segmented words. Since languages follow Zip\ufb01an distributions [29] and we are giving to the model the most frequent words in the corpus, analysis is not done in terms of tokens, since this would be over-optimistic and bias the model evaluation, but only in terms of types. Results are presented in Table 3. For types, we observed an increase of 2.4% in recall. This is not a huge improvement, considering that we are giving 100 words to the model. We discovered that our unsupervised model was already able to discover 97 of these 100 frequent words, which could justify the small performance difference between the models. In addition to the 100 types already known, the semi-supervised model found 50 new types that the unsupervised system was unable to discover. Finally, it is interesting to notice that, while the performance increase is not huge, the semi-supervised system reduced considerably the number of types generated, from 11,266 to 7,473. This suggests that this additional information helped the model to create a better vocabulary representation, closer to the gold standard vocabulary. Recall Precision F-score \u03c3 Reverse Model (Fr-Mb) with AS 27.23 15.02 19.36 0.032 dpseg 13.94 38.32 20.45 0.272 Table 4: Comparison between the NB model (dpseg) and the reverse model with alignment smoothing (AS) for unsupervised word discovery. 
The scores were obtained by averaging over three instances of each model. Fig. 2: Word frequency distribution of the three models and the gold standard distribution. 6. ANALYSIS 6.1. Baseline Comparison As a baseline, we used dpseg [30, 31] which implements a Nonparametric Bayesian approach, where (pseudo)-words are generated by a bigram model over a non-\ufb01nite inventory, through the use of a Dirichlet-Process. We used the same hyper-parameters as [18], which were tuned on a larger English corpus and then successfully applied to the segmentation of Mboshi. We use a random initialization and 19,600 sampling iterations. Table 4 shows our results for types compared to the NB model. Although the former is able to retrieve more from the vocabulary, the latter has higher precision, and both are close in terms of F-score. Additionally, ours has the advantage of providing clues for translation. It is interesting to notice that our neural approach, which is not specialized for this task (the soft-alignment scores are only a by-product of translation), was able to achieve close performance to the dpseg method, which is known to be very good in low-resource scenarios. This highlights the potential of our approach for language documentation. 6.2. Vocabulary Analysis To understand the segmentation behavior of our approach, we looked at the generated vocabulary. We compare our unsupervised and semi-supervised methods with the gold standard and the NB baseline, dpseg. The \ufb01rst characteristic \fFig. 3: Type length distribution of the gold standard, dpseg and our unsupervised and semi-supervised methods. we looked at was the word distribution of the generated vocabularies. While we already knew that dpseg constraints the generated vocabulary to follow a power law, we observed that our approaches also display such a behavior. They produce curves that are as close to the real language distribution as dpseg (see Figure 2). We also measured the average word length to identify under-segmentation and over-segmentation. To be able to compare vocabularies of varying sizes, we normalized the frequencies by the total number of generated types. The curves are shown in Figure 3. Reading the legend from left to right, the vocabulary sizes are 6,245, 2,285, 11,266, and 7,473. Our semi-supervised con\ufb01guration is the closest to the real vocabulary in terms of vocabulary size, with only 1,228 more types. All the approaches (including dpseg) oversegment the input in a similar way, creating vocabularies with average word length of four (Figure 3). Since both dpseg and neural-based approaches suffer from the same over-segmentation problem, we believe that this is a consequence of the corpus used for training, and not necessarily a general characteristic of our approach in lowresource scenarios. For our neural approaches, another justi\ufb01cation is the corpus being small, and the average tokens per sentence being higher at the French side (shown in Table 1), which can potentially disperse the alignments over the possible translations, creating multiple boundaries. Moreover, as Mboshi is an agglutinative language, there were several cases in which we had a good alignment but wrong segmentation. An example is shown in Figure 4, where we see that the word \u201c\u00b4 \u0131mok\u00b4 \u03c9s\u00b4 \u03c9\u201d was split in two words in order to keep its alignment to both parts of its French translation \u201csuis bless\u00b4 e\u201d. 
This is also the case of the last word in this \ufb01gure: Mboshi does not require articles preceding nouns, which caused misalignment. We believe that by exploiting translation alignment, we could constraint our segmentation procedure, creating a more accurate word discovery model. Finally, we were able to create a model of reasonable quality which gives segmentation and alignment information using only 5,157 sentences for training (low-resource scenario). Fig. 4: Example of soft-alignment generated by our unsupervised word discovery model. The darker the square, the higher is the probability for the source-target pair. Our segmentation was \u201cng\u00b4 a \u00b4 \u0131mo k\u00b4 \u03c9s\u00b4 \u03c9 m\u2019 \u00b4 e b\u00b4 \u03c9li\u201d, while the correct one is \u201cng\u00b4 a \u00b4 \u0131mok\u00b4 \u03c9s\u00b4 \u03c9 m\u2019 \u00b4 eb\u00b4 \u03c9li\u201d. 7." + } + ], + "Vassilina Nikoulina": [ + { + "url": "http://arxiv.org/abs/2103.01819v2", + "title": "The Rediscovery Hypothesis: Language Models Need to Meet Linguistics", + "abstract": "There is an ongoing debate in the NLP community whether modern language\nmodels contain linguistic knowledge, recovered through so-called probes. In\nthis paper, we study whether linguistic knowledge is a necessary condition for\nthe good performance of modern language models, which we call the\n\\textit{rediscovery hypothesis}. In the first place, we show that language\nmodels that are significantly compressed but perform well on their pretraining\nobjectives retain good scores when probed for linguistic structures. This\nresult supports the rediscovery hypothesis and leads to the second contribution\nof our paper: an information-theoretic framework that relates language modeling\nobjectives with linguistic information. This framework also provides a metric\nto measure the impact of linguistic information on the word prediction task. We\nreinforce our analytical results with various experiments, both on synthetic\nand on real NLP tasks in English.", + "authors": "Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gall\u00e9, Zhenisbek Assylbekov", + "published": "2021-03-02", + "updated": "2022-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Vector representations of words obtained from self-supervised pretraining of neural language models (LMs) on massive unlabeled data have revolutionized NLP in the last decade. This success has spurred an interest in trying to understand what type of knowledge these models actually learn (Rogers et al., 2020; Petroni et al., 2019a). Of particular interest here is \u201clinguistic knowledge\u201d, which is generally measured by annotating test sets through experts (linguists) following certain pre-de\ufb01ned linguistic schemas. These annotation schemas are based on language regularities manually de\ufb01ned by linguists. On the other side, we have language models which are pretrained predictors that assign a probability to the presence of a token given the surrounding context. Neural models solve this task by \ufb01nding patterns in the text. We refer to the claim that such neural language models rediscover linguistic knowledge as the rediscovery hypothesis. It stipulates that the \u00a92021 AI Access Foundation. All rights reserved. 
arXiv:2103.01819v2 [cs.CL] 3 Jan 2022 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov patterns of the language discovered by the model trying to solve the pretraining task correlate with the human-de\ufb01ned linguistic regularity. In this work we measure the amount of linguistics rediscovered by a pretrained model through the so-called probing tasks: hidden layers of a neural LM are fed to a simple classi\ufb01er\u2014a probe\u2014that learns to predict a linguistic structure of interest (Ettinger et al., 2016; Adi et al., 2017; Conneau et al., 2018; Hewitt & Manning, 2019; Tenney et al., 2019a). In the \ufb01rst part of this paper (Section 2) we attempt to challenge the rediscovery hypothesis through a variety of experiments to understand to what extent it holds. Those experiments aim to verify whether the path through language regularity is indeed the one taken by pretrained LMs or whether there is another way to reach good LM performance without rediscovering linguistic structure. Our experiments show that pretraining loss is indeed tightly linked to the amount of linguistic structure discovered by an LM. We, therefore, fail to reject the rediscovery hypothesis. This negative attempt, as well as the abundance of positive examples in the literature, motivates us to prove mathematically the rediscovery hypothesis. In the second part of our paper (Section 3) we use information theory to prove the contrapositive of the hypothesis\u2014 removal of linguistic information from an LM degrades its performance. Moreover, we show that the decline in the LM quality depends on how strongly the removed property is interdependent with the underlying text: a greater dependence leads to a greater drop. We con\ufb01rm this result empirically, both with synthetic data and with real annotations on English text. The result that removing information that contains strong mutual information with the underlying text degrades (masked) word prediction might not seem surprising a posteriori. However, it is this surprise that lies at the heart of most of the work in recent years around the discovery of how easily this information can be extracted from intermediate representations. Our framework also provides a coe\ufb03cient that measures the dependence between a probing task and the underlying text. This measure can be used to determine more complex probing tasks, whose rediscovery by language models would indeed be surprising. 2. Do Pretraining Objectives Correlate with Linguistic Structures? The \ufb01rst question we pose in this work is the following: is the rediscovery of linguistic knowledge mandatory for models that perform well on their pretraining tasks, typically language modeling or translation; or is it a side e\ufb00ect of overparameterization?1 We analyze the correlation between linguistic knowledge and LM performance with pruned pretrained models. By compressing a network through pruning we retain the same overall architecture and can compare probing methods. More important, we hypothesize that pruning removes all unnecessary information with respect to the pruning objective (language modeling) and that it might be that information that is used to rediscover linguistic knowledge. To complete this experiment, we also track the pruning e\ufb03ciency during pretraining. 1. Overparamterization is de\ufb01ned informally as \u201chaving more parameters than can be estimated from the data\u201d, and therefore using a model richer than necessary for the task at hand. 
Those additional parameters could be responsible for the good performance of the probes. 1344 \fLanguage Modeling and Linguistic Structures 2.1 Pruning Method The lottery ticket hypothesis (LTH) of Frankle and Carbin (2019) claims that a randomlyinitialized neural network f(x; \u03b8) with trainable parameters \u03b8 \u2208Rn contains subnetworks f(x; m \u2299\u03b8), m \u2208{0, 1}n, such that, when trained in isolation, they can match the performance of the original network.2 The authors suggest a simple procedure for identifying such subnetworks (Alg. 2.1). When this procedure is applied iteratively (Step 5 of Alg. 2.1), we get a sequence of pruned models {f(x; mi \u2299\u03b80)} in which each model has fewer parameters than its predecessor: \u2225mi\u22250 < \u2225mi\u22121\u22250. Frankle and Carbin used iterative pruning for image classi\ufb01cation models and found subnetworks that were 10%\u201320% of the sizes of the original networks and met or exceeded their validation accuracies. Such a compression approach retains weights important for the main task while discarding others. We hypothesize that it might be those additional weights that contain the signals used by probes. But, if the subnetworks retain linguistic knowledge then this is evidence in favor of the rediscovery hypothesis. Algorithm 2.1: Lottery ticket hypothesis\u2014Identifying winning tickets (Frankle & Carbin, 2019) 1 Randomly initialize a neural network f(x; \u03b80), \u03b80 \u2208Rn 2 Train the network for j iterations, arriving at parameters \u03b8j. 3 Prune p% of the parameters in \u03b8j, creating a mask m \u2208{0, 1}n. 4 Reset the remaining parameters to their values in \u03b80, creating the winning ticket f(x; m \u2299\u03b80). 5 Repeat from 2 if performing iterative pruning. 6 Train the winning ticket f(x; m \u2299\u03b80) to convergence. 2.2 Models We explore one static embedding model, SGNS, and two contextualized embedding models, CoVe and RoBERTa. In this manner, we cover the full spectrum of modern representations of words, from static embeddings to shallow and deep contextual embeddings. CoVe (McCann et al., 2017) uses the top-level activations of a two-layer BiLSTM encoder from an attentional sequence-to-sequence model (Bahdanau et al., 2015) trained for English-to-German translation. The authors used the CommonCrawl-840B GloVe model (Pennington et al., 2014) for English word vectors, which were completely \ufb01xed during pretraining, and we follow their setup. This entails that the embedding layer on the source side is not pruned during the LTH procedure. We also concatenate the encoder output with the GloVe embeddings as is done in the original paper (McCann et al., 2017, Eq. 6). BERT (Devlin et al., 2019) is a deep Transformer (Vaswani et al., 2017) encoder that has become the de facto standard when it comes to contextualized embeddings. We pretrain the RoBERTa variant of the BERT model (Liu et al., 2019). RoBERTa stands for robustly optimized BERT which was trained with hyperparameters optimized for convergence, with 2. \u2225x\u22250 is a number of nozero elements in x \u2208Rd, \u2299is the element-wise multiplication. 1345 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov dynamic masking instead of static masking, and restricted to masked LM objective only.3 Unlike their predecessors, such as CoVe and ELMo (Peters et al., 2018), BERT and other Transformer-based encoders are considered deep contextualizers. 
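Before turning to the individual models, a minimal PyTorch sketch of the iterative pruning loop of Algorithm 2.1 may help fix ideas. Here `make_model` and `train` are hypothetical placeholders, only weight matrices are pruned for simplicity, and `train` is assumed to re-apply the masks (e.g. `param.data.mul_(mask)`) after every optimizer step.

```python
import copy
import torch

def magnitude_mask(model, prune_fraction, old_masks):
    """Step 3 of Alg. 2.1: globally prune the smallest-magnitude surviving weights."""
    surviving = torch.cat([
        p.detach().abs().flatten()[old_masks[name].flatten().bool()]
        for name, p in model.named_parameters() if p.dim() > 1
    ])
    threshold = torch.quantile(surviving, prune_fraction)
    return {
        name: old_masks[name] * (p.detach().abs() > threshold).float()
        for name, p in model.named_parameters() if p.dim() > 1
    }

def lottery_ticket(make_model, train, n_rounds=5, prune_fraction=0.2):
    model = make_model()                                   # Step 1: random init theta_0
    init_state = copy.deepcopy(model.state_dict())
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}
    for _ in range(n_rounds):
        train(model, masks)                                # Step 2: train with masked weights
        masks = magnitude_mask(model, prune_fraction, masks)  # Step 3: prune p% of survivors
        model.load_state_dict(init_state)                  # Step 4: rewind to theta_0
    train(model, masks)                                    # Step 6: train the winning ticket
    return model, masks
```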
Word2vec SGNS (Mikolov et al., 2013b) is a shallow two-layer neural network that produces uncontextualized word embeddings. It is widely accepted that the SGNS vectors capture words semantics to a certain extent, which is con\ufb01rmed by the folklore examples, such as wking \u2212wman + wwoman \u2248wqueen. So, the question of the relationship between the SGNS objective function and its ability to discover linguistics is also relevant. 2.3 Measuring the Amount of Linguistic Knowledge To properly measure the amount of linguistic knowledge in word vectors we de\ufb01ne it as the performance of classi\ufb01ers (probes) that take those vectors as input and are trained on linguistically annotated data. This de\ufb01nition has the advantage of being able to be measured exactly, at the cost of avoiding the discussion of whether POS tags or syntactic parse trees indeed denote linguistic knowledge captured by humans in their learning process. We should note that this probing approach has received a lot of criticism recently (Hewitt & Liang, 2019; Pimentel et al., 2020b; Voita & Titov, 2020) due to its inability to distinguish between information encoded in the pretrained vectors from the information learned by the probing classi\ufb01er. However, in our study, the question is not How much linguistics is encoded in the presentation vector?, but rather Does one vector contain more linguistic information than the other? We compare di\ufb00erent representations of the same dimensionality using probing classi\ufb01ers of the same capacity. Even if part of the probing performance is due to the classi\ufb01er itself we claim that the di\ufb00erence in the probing performance will be due to the di\ufb00erence in the amount of linguistic knowledge encoded in the representations we manipulate. This conjecture is strengthened by the \ufb01ndings of Zhang et al. (2020) who analyzed the representations from pretrained miniBERTas4 and demonstrated that the trends found through edge probing (Tenney et al., 2019b) are the same as those found through better-designed probes such as Minimum Description Length (Voita & Titov, 2020). Therefore in our work we adopt edge probing and structural probing for contextualized embeddings. For static embeddings, we use the traditional word similarity and word analogy tasks. Edge probing (Tenney et al., 2019b) formulates several linguistics tasks of di\ufb00erent nature as text span classi\ufb01cation tasks. The probing model is a lightweight classi\ufb01er on top of the pretrained representations trained to solve those linguistic tasks. In our study, we use the part-of-speech tagging (POS), constituent labeling, named entity labeling (NE), and semantic role labeling (SRL) tasks from the suite, in which a probing classi\ufb01er receives a sequence of tokens and predicts a label for it. For example, in the case of constituent labeling, for a sentence This probe [discovers linguistic knowledge], the sequence in square brackets should be labeled as a verb phrase. Structural probing (Hewitt & Manning, 2019) evaluates whether syntax trees are embedded in a linear transformation of a neural network\u2019s word representation space. The 3. BERT has next sentence prediction loss in addition. 4. RoBERTa models trained on a varied amount of training data 1346 \fLanguage Modeling and Linguistic Structures probe identi\ufb01es a linear transformation under which squared Euclidean distance encodes the distance between words in the parse tree. 
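A simplified PyTorch sketch of such a distance probe is shown below. It follows the idea of Hewitt and Manning (a learned linear map whose squared distances are regressed onto parse-tree distances) but omits details of their training setup; all tensors in the toy step are random.

```python
import torch

class DistanceProbe(torch.nn.Module):
    """Learns a linear map B; the squared L2 distance between B*h_i and B*h_j is
    trained to match the distance between words i and j in the parse tree."""
    def __init__(self, model_dim, probe_rank=128):
        super().__init__()
        self.B = torch.nn.Parameter(torch.randn(probe_rank, model_dim) * 0.01)

    def forward(self, h):                    # h: (seq_len, model_dim) contextual embeddings
        proj = h @ self.B.T                  # (seq_len, probe_rank)
        diff = proj[:, None, :] - proj[None, :, :]
        return (diff ** 2).sum(-1)           # (seq_len, seq_len) squared probe distances

def probe_loss(pred_sq_dist, tree_dist):
    """L1 loss between squared probe distances and gold tree distances,
    normalized by squared sentence length (roughly as in Hewitt & Manning)."""
    n = tree_dist.shape[0]
    return (pred_sq_dist - tree_dist).abs().sum() / (n * n)

# Toy step: random "embeddings" for a 5-token sentence and a chain-shaped parse tree.
h = torch.randn(5, 768)
tree_dist = torch.tensor([[abs(i - j) for j in range(5)] for i in range(5)], dtype=torch.float)
probe = DistanceProbe(model_dim=768)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss = probe_loss(probe(h), tree_dist)
loss.backward(); opt.step()
```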
Hewitt and Manning show that such transformations exist for both ELMo and BERT but not in static baselines, providing evidence that entire syntax trees can be easily extracted from the vector geometry of deep models. Word similarity (Finkelstein et al., 2002) and word analogy (Mikolov et al., 2013c) tasks can be considered as nonparametric probes of static embeddings, and\u2014 di\ufb00erently from the other probing tasks\u2014are not learned. The use of word embeddings in the word similarity task has been criticized for the instability of the results obtained (Antoniak & Mimno, 2018). Regarding the word analogy task, Schluter (2018) raised concerns on the misalignment of assumptions in generating and testing word embeddings. However, the success of the static embeddings in performing well in these tasks was a crucial part of their widespread adoption. 2.4 Experimental Setup We prune the embedding models from Section 2.2 with the LTH algorithm (Alg. 2.1) and evaluate them with probes from Section 2.3 at each pruning iteration. CoVe and SGNS are pruned iteratively, while for RoBERTa we perform one-shot pruning at di\ufb00erent rates.5 Assuming that \u2113\u03c9 i is the validation loss of the embedding model \u03c9 \u2208{CoVe, RoBERTa, SGNS} at iteration i, and \u2206s\u03c9,T i := s\u03c9,T i \u2212s\u03c9,T 0 is the drop in the corresponding score on the probing task T \u2208{NE, POS, Const., Struct., Sim., Analogy} compared to the score s\u03c9,T 0 of the baseline (unpruned) model, we obtain pairs (\u2113\u03c9 i , \u2206s\u03c9,T i ) for further analysis. Keep in mind that RoBERTa and SGNS are pruned in full, while in CoVe we prune everything except the source-side embedding layer. This exception is due to the design of the CoVe model, and we follow the original paper\u2019s setup (McCann et al., 2017). Software and datasets. We pretrain CoVe on the English\u2013German part of the IWSLT 2016 machine translation task (Cettolo et al., 2016) using the OpenNMT-py toolkit (Klein et al., 2017). RoBERTa is pretrained on the WikiText-103 dataset (English) (Merity et al., 2017) using the fairseq toolkit (Ott et al., 2019b) with default training settings (Ott et al., 2019a). Finally, the SGNS model is pretrained on the text8 data (English as well) (Mahoney, 2011) using our custom implementation (Assylbekov, 2020). The edge probing classi\ufb01er is trained on the standard benchmark dataset OntoNotes 5.0 (Weischedel et al., 2013) using the jiant toolkit (Pruksachatkun et al., 2020). The structural probe is trained on the English UD (Silveira et al., 2014) using the code from the authors (Hewitt, 2019). For word similarities we use the WordSim353 dataset (Finkelstein et al., 2002), while for word analogies we use the Google dataset (Mikolov et al., 2013a). All those datasets are in English. Optimization is performed in almost the same way as in the original works on CoVe, RoBERTa, and SGNS. See Appendix A for details. 5. This is done to speedup experiments on RoBERTa, as one-shot pruning at di\ufb00erent rates can be run in parallel. 1347 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov 2.5 Results First, we note that the lottery ticket hypothesis is con\ufb01rmed for the embedding models since pruning up to 60% weights does not harm their performance signi\ufb01cantly on held-out data (Fig. 
1).6 Before proceeding to the results of probing of pruned models, we want to note that the probing scores of the baseline (unpruned) models obtained by us are close to the scores from the original papers (Tenney et al., 2019b; Hewitt & Manning, 2019), and are shown in Table 1. Model Task NE POS Const. Struct. CoVe .921 .936 .808 .726 RoBERTa .932 .951 .808 .719 Model Task Similarity Analogy SGNS .716 .332 Table 1: Probing scores for the baseline (unpruned) models. We report the micro-averaged F1 score for the POS, NE, and Constituents; undirected unlabeled attachment score (UUAS) for the structural probe; Spearman\u2019s correlation with the human ratings for the similarity task; and accuracy for the analogy task. Probing results are provided in Fig. 2 and 3, where we scatter-plot validation loss \u2113\u03c9 i vs drop in probing performance \u2206s\u03c9,T i for each of the model-probe combinations.7 First, we note that, in most cases, the probing score correlates with the pretraining loss, which supports the rediscovery hypothesis. We note that the probing score decreases slower for some tasks (e.g., POS tagging), but is much steeper for others (e.g., constituents labeling). This is complementary to the \ufb01ndings of Zhang et al. (2020) who showed that the syntactic learning curve reaches plateau performance with less pretraining data while solving semantic tasks requires more training data. Our results suggest that similar behavior emerges with respect to the model size: simpler tasks (e.g., POS tagging) can be solved with smaller models, while more complex linguistic tasks (e.g., syntactic constituents or dependency parsing) require bigger model size. In addition, we note that in the case of CoVe, and in contrast to RoBERTa, the probing scores for more local (i.e. less context-dependent) tasks such as POS and NER hardly decrease with an increase in the pruning rate. We believe that this is because CoVe representations by default contain unpruned static GloVe embeddings, which by themselves already obtain good performance on the more local tasks. The reader might have noted that in the case of CoVe in Fig. 2 when the range of \u2113i\u2019s is restricted to low values, there is a lack of correlation between \u2113i and \u2206si. We discuss this further in Appendix B as we argue that this does not contradict our main \ufb01nding. 6. In the case of SGNS, pruning up to 80% of weights does not a\ufb00ect its validation loss. Since solving SGNS objective is essentially a factorization of the pointwise mutual information matrix, in the form PMI \u2212log k \u2248WC (Levy & Goldberg, 2014), this means that a factorization with sparse W and C is possible. This observation complements the \ufb01ndings of Tissier et al. (2019) who showed that near-tooptimal factorization is possible with binary W and C. 7. Note, that cross-entropy values of CoVe and RoBERTa are not comparable as these are over di\ufb00erent corpora, languages and vocabularies. 1348 \fLanguage Modeling and Linguistic Structures Figure 1: Results of applying the LTH (Algorithm 2.1) to CoVe, RoBERTa, and SGNS. In each case we scatter-plot the percentage of remained weights vs validation loss, which is cross entropy for CoVe and RoBERTa, and a variant of negative sampling objective for SGNS. Figure 2: Probing results for the CoVe and RoBERTa embeddings. Horizontal axes indicate validation loss values, which are cross-entropy values. Vertical axes indicate drops in probing performances. 
In case of edge probing, we use the NE, POS, and constituent labeling tasks from the suite of Tenney et al. (2019b) and report the drop in micro-averaged F1 score compared to the baseline (unpruned) model. In case of structural probing, we use the distance probe of Hewitt and Manning (2019) and report the drop in undirected unlabeled attachment score (UUAS). Figure 3: Similarity and analogy results for the SGNS embeddings. For similarities we report the drop in Spearman\u2019s correlation with the human ratings and for analogies in accuracy. 1349 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov Figure 4: Breakdown of POS tagging accuracy decrease by token frequency. We report the drop in accuracy compared to the baseline (unpruned) model. Left: all bins (15) have comparable cumulative counts in the pretraining data. The distribution of tokens over the bins: bin 1 \u2014 5 tokens, bin 2 \u2014 25 tokens, bin 3 \u2014 368 tokens, bin 4 \u2014 2813 tokens, bin 5 \u2014 600K tokens. Right: 4 bins of tokens which are (1) frequent both in pretraining and POS training data (153 tokens), (2) rare in pretraining but frequent in POS training data (66 tokens), (3) frequent in pretraining but rare in POS training data (235 tokens), and (4) rare everywhere (46K tokens). Breakdown by token frequency. The pervasiveness of Zipf\u2019s law in language has the consequence that many conclusions over aggregated scores can be attributed to e\ufb00ects on a small set of tokens. Indeed, when binning tokens by their frequency in the pretraining data and analyzing the POS accuracy drop per bin it becomes obvious that the less frequent a token is the more pruning a\ufb00ects its POS-tag prediction. This is shown in Fig. 4 (left) where tokens are binned in 5 equally-sized groups. A straightforward interpretation of such behavior would be: (1) pruning degrades the representations for rarer tokens in the \ufb01rst place. It is also possible, that (2) pruning degrades all token representations similarly; however, the probing model has the capacity to recover the POS performance for tokens that are frequent in POS tagging corpus but cannot do that for rare tokens. The \ufb01rst reason would support the claim that pruning removes the memorization8 for rare tokens, while the second reason would mean that the pruned model redistributes its representation power across all the token groups to preserve good LM performance. The latter behavior would be more aligned with the rediscovery hypothesis. To better understand which of the two is a better explanation we need to distinguish between tokens which are rare in the pretraining data but are frequent in the downstream POS training data and vice-versa: the behavior on tokens which are rare in POS training data will be more informative of how pruning a\ufb00ects pretrained representations. 8. To do well on the word predictions task the language model can either memorize patterns or learn certain \u201clanguage regularity\u201d (aka generalization). If we do an analogy with linguistic rules: there are cases when the linguistic rules (\u223cgeneralization) can be applied, and there are exceptions. The generalization has its limits, and at some point, the language model needs to memorize to boost the performance. 1350 \fLanguage Modeling and Linguistic Structures Figure 5: Probing intermediate checkpoints of RoBERTa. Top: RoBERTa validation CE loss during training. 
Bottom: Di\ufb00erence between the baseline probing score10 and probing score at each epoch (up to 200). In case of edge probing, we use the SRL, NE, POS, and constituent labeling tasks from the suite of Tenney et al. (2019b) and report the micro-averaged F1 score. In case of structural probing, we use the distance probe of Hewitt and Manning (2019) and report the undirected unlabeled attachment score (UUAS). For this, we group the \ufb01rst three bins together in a freq-MLM group and the two last bins inside rare-MLM; and do a similar split based on the frequency of the tokens in the POS training corpus (freq-POS and rare-POS). The impact of pruning on the four possible combinations of groups is shown in Fig. 4 (right). The probing performance on the tokens which are frequent in POS tagging corpus (freq and rare-MLM/freq-POS) keep almost constant up to a pruning rate of 40%. While tokens which are rare in POS tagging corpus (freq-MLM/rare-POS and rare) seem to su\ufb00er more with pruning rate growth. This lends support to the possibility that the amount of linguistics contained in the pretrained vectors decreases across all the token groups. Finally, we note that up to 60% of pruning both freq-MLM/rare-PoS and rare groups behave similarly but at 70% pruning rate the tokens that are rare everywhere (rare) su\ufb00er from a higher drop, compared to freq-MLM/rare-PoS meaning that probing model is not able to recover the correct PoS tags for rare tokens. This suggests that memorization gets removed from the pretrained model at a higher pruning rate, but it is exploited by the probing model at lower pruning rates. 10. Baseline score corresponds to the probing with best checkpoint (around 450 epochs of training). 1351 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov 2.6 Probing Intermediate Checkpoints Until now we measured correlated probing accuracy on top of fully-trained models with the amount of pruning those models underwent. Here we use a complementary lens, by analyzing how quickly a pretrained model discovers di\ufb00erent types of linguistic knowledge? For this, we save intermediate checkpoints of RoBERTa models and probe those. Similar to pruning, this has the advantage that all models share the same architecture. The results of the probing accuracy over di\ufb00erent epochs are provided in Fig. 5. Similar to Zhang et al. (2020), we \ufb01nd that syntactic tasks (e.g., POS tagging, dependency, and constituency parsing) seem to reach the top probing performance at the beginning of training (30-60 epochs), while semantic tasks (e.g., NER, SRL) keep improving further. While Tenney et al. (2019a) argued that \u201cBERT rediscovers NLP pipeline\u201d by looking at intermediate layers of a fully pretrained model, the same seems to apply during the training: linguistic knowledge to solve lexical tasks are learned \ufb01rst while more complex tasks are easier solved with representation obtained later in the learning. Inter-Section Interlude We conclude that in general the rediscovery hypothesis seems valid: when language models reach good performance they indeed capture linguistic structures (at least in the case of the English language). So we obtained a negative result when we tried to reject the rediscovery hypothesis. Along with this, positive examples are abundant, including the extensive literature on probing (of which we give an overview in Section 4). This might indicate that the hypothesis is indeed true. 
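The binning behind this breakdown can be sketched as follows (our own illustration): tokens are sorted by pretraining frequency and grouped into bins of roughly equal cumulative count, after which a per-bin tagging accuracy (and hence a per-bin drop relative to the unpruned model) is computed.

```python
from collections import Counter

def equal_mass_bins(freq: Counter, n_bins: int):
    """Group tokens, from most to least frequent, into bins with roughly equal
    cumulative counts in the pretraining data (so rare tokens crowd into the last bins)."""
    total = sum(freq.values())
    target = total / n_bins
    bins, current, mass = [], [], 0
    for tok, count in freq.most_common():
        current.append(tok)
        mass += count
        if mass >= target and len(bins) < n_bins - 1:
            bins.append(current)
            current, mass = [], 0
    bins.append(current)
    return bins

def per_bin_accuracy(tokens, gold_tags, pred_tags, bins):
    """POS accuracy restricted to the tokens of each bin."""
    token_to_bin = {t: b for b, group in enumerate(bins) for t in group}
    hits, counts = [0] * len(bins), [0] * len(bins)
    for tok, g, p in zip(tokens, gold_tags, pred_tags):
        b = token_to_bin.get(tok)
        if b is None:
            continue
        counts[b] += 1
        hits[b] += int(g == p)
    return [h / c if c else float("nan") for h, c in zip(hits, counts)]
```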
The lack of correlation between probing scores and LM performance for one of the considered models only questions the probing methodology but does not reject the very fundamental connection between language modeling and learning linguistic structures. We address this in the next section, where we formalize the connection and show how it holds empirically. The empirical experiments will consist in adversarially removing information that could serve probing accuracy while keeping good LM performance.11 3. An Information-Theoretic Framework Recall that the rediscovery hypothesis asserts that neural language models, in the process of their pretraining, rediscover linguistic knowledge. We will prove this claim by contraposition, which states that without linguistic knowledge, the neural LMs cannot perform at their best. A recent paper of Elazar et al. (2021) has already investigated how linearly removing12 certain linguistic information from BERT\u2019s layers impacts its accuracy of predicting a masked token. They showed empirically that dependency information, part-of-speech tags, and named entity labels are important for word prediction, while syntactic constituency boundaries (which mark the beginning and the end of a phrase) are not. One of the questions raised by the authors is how to quantify the relative importance of di\ufb00erent properties encoded in the representation for the word prediction task. The current section of our work 11. A more restricted scenario, where such adversarial training is performed on CoVe is reported in Appendix C. 12. Removing linearly means that a linear classi\ufb01er cannot predict the required linguistic property with above majority class accuracy. 1352 \fLanguage Modeling and Linguistic Structures attempts to answer this question\u2014we provide a metric \u03c1 that is a reliable predictor of such importance. This metric occurs naturally when we take an information-theory lens and develop a theoretical framework that ties together linguistic properties, word representations, and language modeling performance. We show that when a linguistic property is removed from word vectors, the decline in the quality of a language model depends on how strongly the removed property is interdependent with the underlying text, which is measured by \u03c1: a greater \u03c1 leads to a greater drop. The proposed metric has an undeniable advantage: its calculation does not require word representations themselves or a pretrained language model. All that is needed is the text and its linguistic annotation. Thanks to our Theorem 1, we can express the in\ufb02uence of a linguistic property on the word prediction task in terms of the coe\ufb03cient \u03c1. 3.1 Notation We will use plain-faced lowercase letters (x) to denote scalars and plain-faced uppercase letters (X) for random variables. Bold-faced lowercase letters (x) will denote vectors\u2014both random and non-random\u2014in the Euclidean space Rd, while bold-faced uppercase letters (X) will be used for matrices. Assuming there is a \ufb01nite vocabulary W, members of that vocabulary are called tokens. A sentence W1:n is a sequence of tokens Wi \u2208W, this is W1:n = [W1, W2, . . . , Wn]. A linguistic annotation T of a sentence W1:n may take di\ufb00erent forms. For example, it may be a sequence of per-token tags T = [T1, T2, . . . , Tn], or a parse-tree T = (V, E) with vertices V = {W1, . . . , Wn} and edges E \u2282V \u00d7V. 
We only require that T is a deterministic function of W1:n.13 A (masked) language model is formulated as the probability distribution q\u03b8(Wi | \u03bei) \u2248 Pr(Wi | Ci), where Ci is the context of Wi (see below for di\ufb00erent types of context), and \u03bei is the vector representation of Ci. The cross-entropy loss of such a model is \u2113(Wi, \u03bei) := E(Wi,Ci)\u223cD[\u2212log q\u03b8 (Wi | \u03bei)], where D is the true joint distribution of word-context pairs (W, C). For a random variable X, its entropy is denoted by H[X]. For a pair of random variables X and Y , their mutual information is denoted by I[X; Y ]. In Appendix D we provide the necessary background on information theory, and we refer the reader to refer to it if needed. Discreteness of representations. Depending on the LM, Ci is usually either the left context [W1, . . . , Wi\u22121] (for a causal LM) or a subsequence of the bidirectional context [W1, . . . , Wi\u22121, Wi+1, . . . , Wn] (for a masked LM). Although the possible set of all such contexts C is in\ufb01nite, it is still countable. Thus the set of all contextual representations {\u03be : \u03be is a vector representation of C | C \u2208C} is also countable. Hence, we treat \u03be as discrete random vector. 13. Although in reality two people can give two di\ufb00erent annotations of the same text due to inherent ambiguity of language or di\ufb00erent linguistic theories, we will treat T as the \ufb01nal\u2014also called gold\u2014 annotation after disagreements are resolved between annotators and a common reference annotation is agreed upon. 1353 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov 3.2 Main Result Our main result is the following Theorem 1. Let 1. xi be a (contextualized) embedding of a token Wi in a sentence W1:n, and denote \u03c3i := I[Wi; xi]/ H[W1:n], (1) 2. T be a linguistic annotation of W1:n, and the dependence between T and W1:n is measured by the coe\ufb03cient \u03c1 := I[T; W1:n]/H[W1:n], (2) 3. \u03c1 > 1 \u2212\u03c3i, 4. \u02dc xi be a (contextualized) embedding of Wi that contains no information on T. Then the decline in the language modeling quality when using \u02dc xi instead of xi is approximately supralinear in \u03c1: \u2113(Wi, \u02dc xi) \u2212\u2113(Wi, xi) \u2a86H[W1:n] \u00b7 \u03c1 + c (3) for \u03c1 > \u03c10, with constants \u03c10 > 0, and c depending on H[W1:n] and I[Wi; xi]. Proof. The proof is given in Appendix E. Here we provide a less formal argument. Using visualization tricks as in Olah (2015) we can illustrate the essence of the proof by Figure 6. First of all, look at the entropy H[X] as the amount of information in the variable X. Figure 6: Illustration of Theorem 1. Imagine amounts of information as bars. These bars overlap if there is shared information between the respective variables. The annotation T and the embedding vector \u02dc xi are derived from the underlying text W1:n, thus W1:n contains more information than T or \u02dc xi, hence H[W1:n] fully covers H[T] and H[\u02dc xi]. Since the mutual information I[Wi; \u02dc xi] cannot exceed the information in \u02dc xi, we can write I[Wi; \u02dc xi] \u2264H[\u02dc xi]. (4) 1354 \fLanguage Modeling and Linguistic Structures Now recall the Eq. (2): it simply means that \u03c1 is the fraction of information that is left in T after it was derived from W1:n. This immediately implies that the information which is not in T (but was in W1:n initially) is equal to (1 \u2212\u03c1) \u00b7 H[W1:n]. 
Since the embedding \u02dc xi contains no information about the annotation T, H[\u02dc xi] and H[T] do not overlap. But this means that H[\u02dc xi] \u2264(1 \u2212\u03c1) \u00b7 H[W1:n], (5) because \u02dc xi is in the category of no information about T. Combining (1), (4), (5), and the assumption \u03c1 > 1 \u2212\u03c3i, we have I[Wi; \u02dc xi] \u2264(1 \u2212\u03c1) \u00b7 H[W1:n] < \u03c3i \u00b7 H[W1:n] = I[Wi; xi], \u21d2 I[Wi; xi] \u2212I[Wi; \u02dc xi] > (\u03c1 + \u03c3i \u22121) \u00b7 H[W1:n]. (6) The inequality (6) is almost the required (3)\u2014it remains to show that the change in mutual information can be approximated by the change in LM loss; this is done in Lemma E.2. Role of \u03c1 and \u03c3i. Equation (2) quanti\ufb01es the dependence between T and W1:n, and it simply means that the annotation T carries 100 \u00b7 \u03c1% of information contained in the underlying text W1:n (in the information-theoretic sense). The quantity \u03c1 is well known as the entropy coe\ufb03cient in the information theory literature (Press et al., 2007). It can be thought of as an analog of the correlation coe\ufb03cient for measuring not only linear or monotonic dependence between numerical variables but any kind of statistical dependence between any kind of variables (numerical and non-numerical). As we see from Equation (3), the coe\ufb03cient \u03c1 plays a key role in predicting the LM degradation when the linguistic structure T is removed from the embedding xi. In Section 3.3 we give a practical way of its estimation for the case when T is a per-token annotation of W1:n. Similarly, Equation (1) means that both Wi and xi carry at least 100\u00b7\u03c3i% of information contained in W1:n. By Firth\u2019s distributional hypothesis (Firth, 1957),14 we assume that \u03c3i signi\ufb01cantly exceeds zero. Range of \u03c1. In general, mutual information is non-negative. Mutual information of the annotation T and the underlying text W1:n cannot exceed information contained in either of these variables, i.e. 0 \u2264I[T; W1:n] \u2264H[W1:n], and therefore \u03c1 \u2208[0, 1]. Absence of information. When we write \u201c\u02dc xi contains no information on T\u201d, this means that the mutual information between \u02dc xi and T is zero: I[T; \u02dc xi] = 0. (7) In the language of Pimentel et al. (2020a), Equation (7) assumes that all probes\u2014even the best\u2014perform poorly in extracting T from \u02dc xi. This essentially means that the information on T has been \ufb01ltered out of \u02dc x. In practice, we will approximate this with the techniques of Gradient Reversal Layer (Ganin & Lempitsky, 2015) and Iterative Nullspace Projection (Ravfogel et al., 2020). 14. \u201cyou shall know a word by the company it keeps\u201d 1355 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov 3.3 Experiments In this section, we empirically verify the prediction of our theory\u2014the stronger the dependence of a linguistic property with the underlying text the greater the decline in the performance of a language model that does not have access to such property. Here we will focus on only one contextualized embedding model, BERT because it is the most mainstream model. Along with this, we will keep the SGNS for consideration as a model of static embeddings.15 3.3.1 Removal Techniques We consider two ways of removing linguistic information from the embeddings: Gradient Reversal Layer (GRL) and Iterative Nullspace Projection (INLP). 
GRL (Ganin & Lempitsky, 2015) is a method of adversarial training that, in our case, can be used to remove linguistic information T from word embeddings \u03be. For this, the pretraining model is formulated as c W = \u03c9(\u03be), and an auxiliary model is b T = \u03c4(\u03be). The training procedure optimizes min \u03c9,\u03c4,\u03be [\u2113(\u03c9(\u03be), W) + \u2113(\u03c4(\u03b3\u03bb(\u03be)), T)] where \u2113(\u00b7, \u00b7) is the loss function, and \u03b3\u03bb is a layer inserted between \u03be and \u03c4 which acts as the identity during the forward pass, while it scales the gradients passed through it by \u2212\u03bb during backpropagation. In theory, the resulting embeddings \u02dc x are maximally informative for the pretraining task while at the same time minimally informative for the auxiliary task. However, in practice, GRL not always succeeds to fully remove the auxiliary information from the embeddings as was shown by Elazar and Goldberg (2018) (and con\ufb01rmed by our experiments in Appendix F). INLP (Ravfogel et al., 2020) is a method of post-hoc removal of some property T from the pretrained embeddings x. INLP neutralizes the ability to linearly predict T from x (here T is a single tag, and x is a single vector). It does so by training a sequence of auxiliary models \u03c41, . . . , \u03c4k that predict T from x, interpreting each one as conveying information on unique directions in the latent space that correspond to T, and iteratively removing each of these directions. In the ith iteration, \u03c4i is a linear model16 parameterized by a matrix Ui and trained to predict T from x. When the embeddings are projected onto null(Ui) by a projection matrix Pnull(Ui), we have UiPnull(Ui)x = 0, i.e. \u03c4i will be unable to predict T from Pnull(Ui)x. Figure 7 illustrates the method for the case when the property T has only two types of tags and x \u2208R2. The number of iterations k is taken such that no linear classi\ufb01er achieves above-majority accuracy when predicting T from \u02dc x = Pnull(Uk)Pnull(Uk\u22121) . . . Pnull(U1)x. Before measuring the LM loss we allow the 15. Recall that SGNS stands for skip-gram with negative sampling. Skip-gram (Mikolov et al., 2013a) is a masked language model with all tokens but one in a sequence being masked. SGNS approximates its cross-entropy loss by negative sampling procedure (Bengio & Senecal, 2003). 16. Ravfogel et al. (2020) use the Linear SVM (Cortes & Vapnik, 1995), and we follow their setup. 1356 \fLanguage Modeling and Linguistic Structures Figure 7: Nullspace projection for a 2-dimensional binary classi\ufb01er. The decision boundary of Ui is Ui\u2019s null-space. Source: Ravfogel et al. (2020). model to adapt to the modi\ufb01ed embeddings \u02dc x by \ufb01ne-tuning its softmax layer that predicts W from \u02dc x, while keeping the rest of the model (encoding layers) frozen. This \ufb01ne-tuning does not bring back information on T as the softmax layer linearly separates classes. 3.3.2 Tasks We perform experiments on two real tasks and a series of synthetic tasks. In real tasks, we cannot arbitrarily change \u03c1\u2014we can only measure it for some given annotations. Although we have several real annotations that give some variation in the values of \u03c1, we want more control over this metric. Therefore, we come up with synthetic annotations that allow us to smoothly change \u03c1 in a wider range and track the impact on the loss function of a language model. 
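A condensed NumPy/scikit-learn sketch of INLP, following the description above, is given below. The official implementation of Ravfogel et al. differs in details (classifier choice, stopping criteria, multiple classifiers per iteration), and the toy data at the end are invented.

```python
import numpy as np
from sklearn.svm import LinearSVC

def nullspace_projection(W):
    """Projection matrix onto null(W): I - B^T B, where the rows of B form an
    orthonormal basis of W's row space (obtained from the SVD)."""
    _, S, Vt = np.linalg.svd(W, full_matrices=False)
    B = Vt[S > 1e-10]
    return np.eye(W.shape[1]) - B.T @ B

def inlp(X, y, max_iters=30, majority_acc=0.5):
    """Iteratively remove the linearly decodable information about y from X;
    stop once a fresh linear classifier is at (or below) majority-class accuracy."""
    P = np.eye(X.shape[1])
    for _ in range(max_iters):
        clf = LinearSVC(max_iter=5000).fit(X @ P, y)
        if clf.score(X @ P, y) <= majority_acc:
            break
        P = nullspace_projection(clf.coef_) @ P   # compose with the new projection
    return P                                      # filtered embeddings: X @ P

# Toy usage: 2-D "embeddings" whose first coordinate encodes a binary tag.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = np.stack([y + 0.1 * rng.normal(size=500), rng.normal(size=500)], axis=1)
P = inlp(X, y, majority_acc=max(np.bincount(y)) / len(y))
```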
To remove a property T from embeddings, INLP requires a small annotated dataset, while GRL needs the whole pretraining corpus to be annotated.17 Therefore for INLP we use gold annotations, while for GRL we tag the pretraining data with the help of the Stanza tagger (Qi et al., 2020). Real tasks. We consider two real tasks with per-token annotations: part-of-speech tagging (POS) and named entity labeling (NER). The choice of per-token annotations is guided by the suggested method of estimating \u03c1 (Section 3.3.3). POS is the syntactic task of assigning tags such as noun, verb, adjective, etc. to individual tokens. We consider POS tagged datasets that are annotated with di\ufb00erent tagsets: \u2022 Universal Dependencies English Web Treebank (UD EWT, Silveira et al., 2014), which uses two annotation schemas: UPOS corresponding to 17 universal tags across languages, and FPOS which encodes additional lexical and grammatical properties. 17. One may argue that GRL can be applied to a pretrained LM in a \ufb01ne-tuning mode. Our preliminary experiments show that in this case, the linear probe\u2019s accuracy remains way above the majority class accuracy. See Appendix F for details. 1357 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov \u2022 English part of the OntoNotes corpus (Weischedel et al., 2013) based on the Penn Treebank annotation schema (Marcus et al., 1993). NER is the task of predicting the category of an entity referred to by a given token, e.g. does the entity refer to a person, a location, an organization, etc. This task is taken from the English part of the OntoNotes corpus. Note that the Stanza tagger, used to generate annotations for GRL training, relies on universal tagset from Universal Dependencies (UPOS) for POS tagging, and on OntoNotes NER annotations. Table 2 reports some statistics on the size of di\ufb00erent tagsets. UD UPOS UD FPOS ON POS ON NER 17 50 48 66 Table 2: Size of the tagsets for Universal Dependencies and OntoNotes PoS tagging and NER datasets. Synthetic tasks are created as follows. Let T (0) be an annotation of a corpus W1:n, which has m unique tags, and let the corresponding \u03c1(0) := I[T (0); W1:n]/ H[W1:n]. We select the two least frequent tags from the tagset, and con\ufb02ate them into one tag. This gives us an annotation T (1) which contains less information about W1:n than the annotation T (0), and thus has \u03c1(1) < \u03c1(0). In Table 3 we give an example of such con\ufb02ation for a POS-tagged sentence from the OntoNotes corpus (Weischedel et al., 2013). W1:9 When a browser starts to edge near to consuming T (0) WRB DT NN VBZ TO VB RB IN VBG T (1) X DT NN VBZ X VB RB IN VBG W10:18 500 MB of RAM on a regular basis , T (0) CD NNS IN NN IN DT JJ NN , T (1) CD NNS IN NN IN DT JJ NN , W19:22 something is wrong . T (0) NN VBZ JJ . T (1) NN VBZ JJ . Table 3: Example of con\ufb02ating two least frequent tags (WRB and TO) into one tag (X). Next, we select two least frequent tags from the annotation T (1) and con\ufb02ate them. This will give an annotation T (2) with \u03c1(2) < \u03c1(1). Iterating this process m \u22121 times we will end up with the annotation T (m\u22121) that tags all tokens with a single (most frequent) tag. In this last iteration, the annotation has no mutual information with W1:n, i.e. \u03c1(m\u22121) = 0. 
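The conflation procedure can be sketched in a few lines of Python (our own illustration); the label given to a merged pair is arbitrary here, whereas Table 3 simply reuses a single tag X.

```python
from collections import Counter

def conflate_once(tags):
    """Merge the two least frequent tags of a per-token annotation into a single tag."""
    counts = Counter(tags)
    if len(counts) < 2:
        return list(tags)                 # a single-tag annotation: nothing left to merge
    (a, _), (b, _) = counts.most_common()[:-3:-1]   # the two rarest tags
    merged = f"{a}+{b}"                   # label choice is arbitrary
    return [merged if t in (a, b) else t for t in tags]

def conflation_series(tags):
    """Yield T^(0), T^(1), ..., T^(m-1); the last annotation uses a single tag, so rho = 0."""
    series = [list(tags)]
    while len(set(series[-1])) > 1:
        series.append(conflate_once(series[-1]))
    return series

# Toy usage on a tiny POS-tagged sequence.
tags = ["DT", "NN", "VBZ", "DT", "JJ", "NN", "WRB", "TO"]
for annotation in conflation_series(tags):
    print(annotation)
```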
1358 \fLanguage Modeling and Linguistic Structures 3.3.3 Experimental Setup We remove (pseudo-)linguistic structures (Section 3.3.2) from BERT and SGNS embeddings using the methods from Section 3.3.1,18 and measure the decline in the language modeling performance. Assuming that \u2206\u2113\u03c9,T,\u00b5 is the validation loss increase of the embedding model \u03c9 \u2208{BERT, SGNS} when information on T \u2208{Synthetic, POS, NER} is removed from \u03c9 using the removal method \u00b5 \u2208{GRL, INLP}, we compare |\u2206\u2113\u03c9,T,\u00b5| against \u03c1 de\ufb01ned by (2) which is the strength of interdependence between the underlying text W1:n and its annotation T. By Theorem 1, \u2206\u2113\u03c9,T,\u00b5 is \u2126(\u03c1)19 for any combination of \u03c9, T, \u00b5. Estimating \u03c1. Recall that \u03c1 := I[T; W1:n]/ H[W1:n] (Eq. 2) and that the annotation T is a deterministic function of the underlying text W1:n (Sec. 3.1). In this case, we can write20 \u03c1 = H[T] \u2212 0 z }| { H[T | W1:n] H[W1:n] = H[T] H[W1:n]. (8) and when T is a per-token annotation of W1:n, i.e. T = T1:n (which is the case for the annotations that we consider), this becomes \u03c1 = H[T1:n]/ H[W1:n]. Thus to estimate \u03c1, we simply need to be able to estimate the latter two entropies. This can be done by training an autoregressive sequence model, such as LSTM, on W1:n and on T1:n. The loss function of such a model\u2014the cross-entropy loss\u2014serves as an estimate of the required entropy. Notice, that we cannot use masked LMs for this estimation as they do not give a proper factorization of the probability p(w1, . . . , wn). Thus, we decided to choose the AWD-LSTM-MoS model of Yang et al. (2018) which is a compact and competitive LM21 that can be trained in a reasonable time and with moderate computing resources. In addition, we also estimated the entropies through a vanilla LSTM with tied input and output embeddings (Inan et al., 2017), and a Kneser-Ney 3-gram model22 (Ney et al., 1994) to test how strongly our method depends on the underlying sequence model. Limitations. The suggested method of estimating \u03c1 through autoregressive sequence models is limited to per-token annotations only. However, according to formula (8), to estimate \u03c1 for deeper annotations, it is su\ufb03cient to be able to estimate the entropy H[T] of such deeper linguistic structures T. For example, to estimate the entropy of a parse tree, one can use the cross-entropy produced by a probabilistic parser. The only limitation is the determinism of the annotation process. Amount of information to remove. Each of the removal methods has a hyperparameter that controls how much of the linguistic information T is removed from the word vectors x\u2014the number of iterations in INLP and \u03bb in GRL. Following Elazar et al. (2021) we keep 18. INLP is applied to the last layers of BERT and SGNS. 19. as a reminder, this means that there exist constants c0, c1, \u03c10 s.t. \u2206\u2113\u03c9,T,\u00b5 \u2265c1 \u00b7 \u03c1 + c0 for \u03c1 \u2265\u03c10. 20. In general, for two random variables X and Y , Y = f(X) if and only if H[Y | X] = 0. 21. https://paperswithcode.com/sota/language-modelling-on-wikitext-2 22. We used the SRILM toolkit (Stolcke, 2002). We used Witten-Bell discounting for annotations because modi\ufb01ed Knesser-Ney discounting does not apply to them. This is a known issue with Knesser-Ney when the vocabulary size (number of unique tags) is small. 
See item C3 at http://www.cs.cmu.edu/afs/cs/ project/cmt-55/lti/Courses/731/homework/srilm/man/html/srilm-faq.7.html. 1359 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov iterating the INLP procedure or increasing \u03bb in GRL until the performance of a linear probe that predicts T from the \ufb01ltered embeddings \u02dc x drops to the majority-class accuracy. When this happens we treat the resulting \ufb01ltered embeddings \u02dc x as containing no information on T. Optimization details. For the INLP experiments we use pretrained BERT-Base from HuggingFace (Wolf et al., 2020), and an SGNS pretrained in-house (Assylbekov, 2020). For the GRL experiments we pretrain LMs ourselves, details are in Appendix A. 3.3.4 Results Real tasks. Table 4 reports the loss drop of pretrained LM when removing linguistic information (POS or NER) from the pretrained model. Plots that illustrate how losses and probing accuracies change with INLP iterations are provided in Appendix G. First, we compare UPOS tagsets versus FPOS tagset: intuitively FPOS should have a tighter link with underlying text, and therefore result in higher \u03c1 and as a consequence in a higher drop in loss after the removal of this information from words representations. This is con\ufb01rmed by the numbers reported in Table 4. We also see a greater LM performance drop when the POS information is removed from the models compared to NE information removal. This is in line with Theorem 1 as POS tags depend stronger on the underlying text than NE labels as measured by \u03c1. In the case of pretraining SGNS with GRL, we could not reach the majority class accuracy.23 Reported is the loss increase for the model that had the lowest accuracy (Acc = .466 while the majority accuracy for this task is .126). If it were possible to train the SGNS model with GRL removing POS tags and reaching the majority class accuracy, its loss increase would be even higher than the one reported. Removal method \u00b5 INLP GRL Annotation T ON NER UD UPOS UD FPOS ON POS NER UPOS \u03c1 (AWD-LSTM-MoS) 0.18 0.32 0.36 0.42 0.18 0.43 \u03c1 (LSTM) 0.18 0.32 0.37 0.42 0.22 0.34 \u03c1 (KN-3) 0.18 0.36 0.41 0.42 0.17 0.37 \u2206\u2113BERT,T,\u00b5 0.13 0.54 0.70 0.87 6.16 6.16 \u2206\u2113SGNS,T,\u00b5 1.33 1.62 1.79 2.04 0.32 >1.1\u2020 Table 4: Results on POS tagging and NER tasks. ON stands for OntoNotes, UD for Universal Dependencies. The UD EWT dataset has two types of POS annotation: coarse tags (UPOS) and \ufb01ne-grained tags (FPOS). \u2020GRL for SGNS on UPOS annotation did not reach the majority class accuracy (.126). KN-3 is a Kneser-Ney 3-gram model. We note larger \u2206\u2113BERT,T,GRL compared to \u2206\u2113BERT,T,INLP. We attribute it to the following: INLP works with already pretrained optimal \u03c9 and we \ufb01ne-tune its softmax layer 23. This is because GRL does not always fully remove all the auxiliary task information from the main model (Elazar & Goldberg, 2018). 1360 \fLanguage Modeling and Linguistic Structures for modi\ufb01ed embeddings \u02dc x, while GRL with a large \u03bb (that is needed to bring the probing accuracy down to a majority class) moves \u03c9\u2019s weights into a bad local minimum when pretraining from scratch. Moreover, the pretrained BERT from HuggingFace and RoBERTa pretrained by us are pretrained over di\ufb00erent corpora and are using di\ufb00erent vocabularies. Thus increases in the cross-entropies caused by INLP and by GRL are not directly comparable. 
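Returning to the removal procedure itself, the INLP loop with the majority-accuracy stopping rule described at the beginning of this section can be sketched as follows. This is our simplified numpy/scikit-learn illustration rather than Ravfogel et al.'s (2020) released implementation: the probe is scored on the same embeddings it was trained on for brevity (a held-out probe would be preferable), labels are assumed to be integer-encoded, and all names are ours.

```python
import numpy as np
from sklearn.svm import LinearSVC


def nullspace_projection(W, tol=1e-10):
    """Projection matrix onto the null space of the rows of W (the probe weights)."""
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    B = vt[s > tol]                      # orthonormal basis of the row space of W
    return np.eye(W.shape[1]) - B.T @ B


def inlp(X, y, max_iter=100, slack=1e-3):
    """Iteratively project out directions a linear probe uses to predict y,
    stopping once the probe falls to majority-class accuracy."""
    y = np.asarray(y)
    majority = np.bincount(y).max() / len(y)
    P = np.eye(X.shape[1])
    X_filtered = X.copy()
    for _ in range(max_iter):
        probe = LinearSVC(max_iter=5000).fit(X_filtered, y)
        if probe.score(X_filtered, y) <= majority + slack:
            break                        # no above-majority linear predictor remains
        P_i = nullspace_projection(probe.coef_)
        P = P_i @ P                      # accumulate P_null(U_k) ... P_null(U_1)
        X_filtered = X @ P.T
    return X_filtered, P
```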
We attempted to conduct experiments on GRL in such a way that their results were comparable to the results of the INLP\u2014we tried to \ufb01ne-tune a pretrained RoBERTa with GRL. However, our attempt was not successful as we did not manage to reach majority class accuracy (more details in Appendix F). This reinforces Elazar and Goldberg\u2019s (2018) argument that GRL has more di\ufb03culties removing fully the auxiliary information from the word representations. Finally, we see that although \u03c1 indeed depends on the underlying sequence model that is used to estimate the entropies H[T1:n] and H[W1:n], all models\u2014AWD-LSTM-MoS, LSTM, and KN-3\u2014preserve the relative order for the annotations that we consider. E.g., all models indicate that OntoNotes NER annotation is the least interdependent with the underlying text, while OntoNotes POS annotation is the most interdependent one. In addition, it turns out that for a quick estimate of \u03c1, one can use the KN-3 model, which on a modern laptop calculates the entropy of texts of 100 million tokens in a few minutes, in contrast to the LSTM, which takes several hours on a modern GPU. Synthetic tasks. To obtain synthetic data, we apply the procedure described in Sect. 3.3.2 to the OntoNotes POS annotation as it has the highest \u03c1 in Table 4 and thus allows us to vary the metric in a wider range. The results of evaluation on the synthetic tasks through the INLP24 are provided in Figure 8. They validate the predictions of our theory\u2014for the annotations with greater \u03c1 there is a bigger drop in the LM performance (i.e. increase in the LM loss) when the information on such annotations is removed from the embeddings. We notice that \u2206\u2113is piecewise-linear in \u03c1 with the slope changing at \u03c1 \u22480.4. We attribute this change to the following: for \u03c1 < 0.4, the majority class (i.e. the most frequent tag) is the tag that encapsulates several con\ufb02ated tags (see Subsection 3.3.2 for details), while for \u03c1 > 0.4, the majority is NN tag. This switch causes a signi\ufb01cant drop in the majority class accuracy which in turn causes a signi\ufb01cant increase in the number of INLP iterations to reach that accuracy, and hence an increase in the amount of information being removed which implies greater degradation of the LM performance. 4. Related Work Theoretical analysis. Since the success of early word embedding algorithms like SGNS and GloVe, there were attempts to analyze theoretically the connection between their pretraining objectives and performance on downstream tasks such as word similarity and word analogy tasks. An incomplete list of such attempts includes those of Levy and Goldberg (2014), Arora et al. (2016), Hashimoto et al. (2016), Gittens et al. (2017), Tian et al. (2017), 24. We did not perform GRL experiments on synthetic tasks as they are computationally expensive\u2014 \ufb01nding \u03bb for each annotation requires multiple pretraining runs. 1361 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov Figure 8: Synthetic task results for INLP. \u2206\u2113is the increase in cross-entropy loss when pseudo-linguistic information is removed from the BERT\u2019s last layer with the INLP procedure. \u03c1 is estimated with the help of AWD-LSTM-MoS model (Yang et al., 2018) as described in Section 3.3.3. Ethayarajh et al. (2019), Allen and Hospedales (2019). 
Most of these works represent pretraining as a low-rank approximation of some co-occurrence matrix\u2014such as PMI\u2014and then use an empirical fact that the set of columns (or rows) of such a matrix is already a good solution to the analogy and similarity tasks. Recently, we have seen a growing number of works devoted to the theoretical analysis of contextualized embeddings. Kong et al. (2020) showed that modern embedding models, as well as the old warrior SGNS, maximize an objective function that is a lower bound on the mutual information between di\ufb00erent parts of the text. Lee et al. (2020) formalized how solving certain pretraining tasks allows learning representations that provably decrease the sample complexity of downstream supervised tasks. Of particular interest is a recent paper by Saunshi et al. (2021) that relates a pretraining performance of an autoregressive LM with a downstream performance for downstream tasks that can be reformulated as next word prediction tasks. The authors showed that for such tasks, if the pretraining objective is \u03f5-optimal,25 then the downstream objective of a linear classi\ufb01er is O(\u221a\u03f5)-optimal. In the second part of our work (Section 3) we prove a similar statement, but the di\ufb00erence is that we study how the removal of linguistic information a\ufb00ects the pretraining objective and our approach is not limited to downstream tasks that can be reformulated as next word prediction. Probing. Early work on probing tried to analyze LSTM language models (Linzen et al., 2016; Shi et al., 2016; Adi et al., 2017; Conneau et al., 2018; Gulordava et al., 2018; Kuncoro et al., 2018; Tenney et al., 2019b). Moreover, word similarity (Finkelstein et al., 2002) and word analogy (Mikolov et al., 2013d) tasks can be regarded as non-parametric probes of static embeddings such as SGNS (Mikolov et al., 2013b) and GloVe (Pennington et al., 2014). Recently the probing approach has been used mainly for the analysis of contextualized word embeddings. Hewitt and Manning (2019) for example showed that entire parse trees can be linearly extracted from ELMo\u2019s (Peters et al., 2018) and BERT\u2019s (Devlin et al., 25. Saunshi et al. (2021) say that the pre-training loss \u2113is \u03f5-optimal if \u2113\u2212\u2113\u2217\u2264\u03f5, where \u2113\u2217is the minimum achievable loss. 1362 \fLanguage Modeling and Linguistic Structures 2019) hidden layers. Tenney et al. (2019b) probed contextualized embeddings for various linguistic phenomena and showed that, in general, contextualized embeddings improve over their non-contextualized counterparts largely on syntactic tasks (e.g., constituent labeling) in comparison to semantic tasks (e.g., coreference). The probing methodology has also shown that BERT learns some re\ufb02ections of semantics (Reif et al., 2019) and factual knowledge (Petroni et al., 2019b) into the linguistic form which are useful in applications such as word sense disambiguation and question answering respectively. Zhang et al. (2020) analyzed how the quality of representations in a pretrained model evolves with the amount of pretraining data. They performed extensive probing experiments on various NLU tasks and found that pretraining with 10M sentences was already able to solve most of the syntactic tasks, while it required 1B training sentences to be able to solve tasks requiring semantic knowledge (such as Named Entity Labeling, Semantic Role Labeling, and some others as de\ufb01ned by Tenney et al. (2019b)). 
Closest to our second part (Section 3), Elazar et al. (2021) propose to look at probing from a di\ufb00erent angle, proposing amnesic probing which is de\ufb01ned as the drop in performance of a pretrained LM after the relevant linguistic information is removed from one of its layers. The notion of amnesic probing fully relies on the assumption that the amount of the linguistic information contained in the pretrained vectors should correlate with the drop in LM performance after this information is removed. In this work (Section 3) we theoretically prove this assumption. While Elazar et al. (2021) measured LM performance as word prediction accuracy, we focus on the native LM cross-entropy loss. In addition, we answer one of the questions raised by the authors on how to measure the in\ufb02uence of di\ufb00erent linguistic properties on the word prediction task\u2014we provide an easy-to-estimate metric that does exactly this. Criticism of probing. The probing approach has been criticized from di\ufb00erent angles. Our attempt to systematize this line of work is given in Figure 9. The seminal paper of Hewitt and Liang (2019) raises the issue of separation between extracting linguistic structures from contextualized embeddings and learning such structures by the probes themselves. This dichotomy was challenged by Pimentel et al. (2020b), Maudslay et al. (2020), but later validated by Zhu and Rudzicz (2020) using an information-theoretic view on probing. Meanwhile, methods were proposed that take into account not only the probing performance but also the ease of extracting linguistic information (Voita & Titov, 2020) or the complexity of the probing model (Pimentel et al., 2020a). At the same time, Wu et al. (2020) and Michael et al. (2020) suggested avoiding learnability issues by non-parametric probing26 and weak supervision respectively. The remainder of the criticism is directed at the limitations of probing such as insu\ufb03cient reliability for low-resourced languages (Eger et al., 2020), lack of evidence that probes indeed extract linguistic structures but do not learn from the linear context only (Kunz & Kuhlmann, 2020), lack of correlation with \ufb01ne-tuning scores (Tamkin et al., 2020) and with pretraining scores (Ravichander et al., 2020; Elazar et al., 2021). The \ufb01rst part of our work (Section 2) partly falls into this latter group, as we did not \ufb01nd 26. Parametric probes transform embeddings to linguistic structures using parameterized operations on vectors (such as feedforward layers). Non-parametric probes transform embeddings using non-parameterized operations on vectors (such as vector addition/subtraction, inner product, Euclidean distance, etc.). The approach of Wu et al. (2020) builds a so-called impact matrix and then feeds it into a graph-based algorithm to induce a dependency tree, all done without learning any parameters. 
1363 \fNikoulina, Tezekbayev, Kozhakhmet, Babazhanova, Gall\u00e9, & Assylbekov Avoid Learnability Issues Limitations of Probing Extractability = Learnability Extractability \u2260 Learnability, Accuracy-Complexity Tradeoff Control tasks (Hewitt & Liang, 2019) Information-theoretic view of probing (Pimentel et al., 2020) Parsing as syntactic probing (Maudslay et al., 2020) Validity of Hewitt & Liang\u2019s dichotomy (Zhu & Rudzicz, 2020) MDL analysis (Voita & Titov, 2020) Pareto probing (Pimentel et al., 2020) Non-parametric probing (Wu et al., 2020) Latent subclass learning (Michael et al., 2020) Lack of correlation with fine-tuning scores (Tamkin et al., 2020) Unreliability for low-resource languages (Eger et al., 2020) Lack of correlation with pre-training scores (Ravichander et al., 2020; Elazar et al., 2020; Our work) Context-only hypothesis (Kunz & Kuhlmann, 2020) Figure 9: Criticism and improvement of the probing methodology. An arrow A \u2192B means that B criticizes and/or improves A. any evidence for a correlation between probing scores and pretraining objectives for better performing CoVe (McCann et al., 2017). Pruning language models. A recent work by Gordon et al. (2020) compressed BERT using conventional pruning and showed a linear correlation between pretraining loss and downstream task accuracy. Chen et al. (2020) pruned pretrained BERT with LTH and \ufb01netuned it to downstream tasks, while Prasanna et al. (2020) pruned \ufb01ne-tuned BERT with LTH and then re-\ufb01ne-tuned it. Sanh et al. (2020) showed that the weights needed for speci\ufb01c tasks are a small subset of the weights needed for masked language modeling, but they prune during \ufb01ne-tuning which is beyond the scope of our work. Zhao et al. (2020) propose to learn the masks of the pretrained LM as an alternative to \ufb01netuning on downstream tasks and shows that it is possible to \ufb01nd a subnetwork of large pretrained model which can reach the performance on the downstream tasks comparable to \ufb01netuning on this task. Generally speaking, the \ufb01ndings of the above-mentioned papers are aligned with our \ufb01ndings that the performance of pruned models on downstream tasks is correlated with the pretraining loss. The one di\ufb00erence from Section 2 of our work is that most of the previous work looks at the performance of \ufb01ne-tuned pruned models. In our work, we probe pruned models, i.e. the remaining weights of language models are not adjusted to the downstream probing task. It is not obvious whether the conclusions from the former should carry over to the latter. 1364 \fLanguage Modeling and Linguistic Structures 5." + } + ], + "Alexandre Alcoforado": [ + { + "url": "http://arxiv.org/abs/2401.13229v1", + "title": "From Random to Informed Data Selection: A Diversity-Based Approach to Optimize Human Annotation and Few-Shot Learning", + "abstract": "A major challenge in Natural Language Processing is obtaining annotated data\nfor supervised learning. An option is the use of crowdsourcing platforms for\ndata annotation. However, crowdsourcing introduces issues related to the\nannotator's experience, consistency, and biases. An alternative is to use\nzero-shot methods, which in turn have limitations compared to their few-shot or\nfully supervised counterparts. Recent advancements driven by large language\nmodels show potential, but struggle to adapt to specialized domains with\nseverely limited data. 
The most common approaches therefore involve the human\nitself randomly annotating a set of datapoints to build initial datasets. But\nrandomly sampling data to be annotated is often inefficient as it ignores the\ncharacteristics of the data and the specific needs of the model. The situation\nworsens when working with imbalanced datasets, as random sampling tends to\nheavily bias towards the majority classes, leading to excessive annotated data.\nTo address these issues, this paper contributes an automatic and informed data\nselection architecture to build a small dataset for few-shot learning. Our\nproposal minimizes the quantity and maximizes diversity of data selected for\nhuman annotation, while improving model performance.", + "authors": "Alexandre Alcoforado, Thomas Palmeira Ferraz, Lucas Hideki Okamura, Israel Campos Fama, Arnold Moya Lavado, B\u00e1rbara Dias Bueno, Bruno Veloso, Anna Helena Reali Costa", + "published": "2024-01-24", + "updated": "2024-01-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction In real-life scenarios, particularly in the realm of Machine Learning (ML) in Natural Language Processing (NLP), annotated data is often a scarce and challenging resource to acquire. In many cases, researchers and practitioners are faced with the daunting task of developing accurate models with extremely limited or even non-existent annotated training data. To address this challenge, the process is typically initiated by building a small annotated dataset and using it as a basis for training ML models using supervised learning methods. Subsequently, this process can be iterated by creating annotated datasets of increasing size through techniques commonly referred to as Active Learning (AL) (Ren et al., 2021). As an alternative approach to acquiring annotated data, crowdsourcing platforms like Amazon Mechanical Turk have been used in recent years. However, relying solely on human annotation services from these platforms brings its own set of challenges (Nowak and R\u00fcger, 2010; Karpinska et al., 2021). Variability in expertise among annotators often results in inconsistent annotation criteria and, at times, conflicting annotations. Moreover, human annotators may encounter difficulties when dealing with large datasets, leading to errors and delays in data annotation processes. An additional concern lies in the potential introduction of bias through annotators\u2019 subjectivity and personal biases, which can negatively affect the performance of trained models. To mitigate these challenges, numerous research works have attempted to address these issues, either by selecting high-quality annotators in multiple-annotated-data setups or by employing diverse methods to weight each annotator\u2019s input (Zhang et al., 2023a; Hsueh et al., 2009; Hovy et al., 2013; Basile et al., 2021). In low-resource settings, a common practice is to randomly sample a subset of the unlabeled data for the annotation process (Tunstall et al., 2022; Beijbom, 2014). This approach involves selecting a few examples at random, which are then annotated to form the initial training dataset. However, arXiv:2401.13229v1 [cs.CL] 24 Jan 2024 \fthis methodology may be suboptimal since it neglects the specific characteristics of the data and the requirements of the learning model. In other words, randomly sampled data may fail to adequately represent the full spectrum of classes or concepts present within the dataset. 
The advent of zero-shot methods has provided an intriguing approach to perform initial annotation without any annotated training data (Alcoforado et al., 2022). Nonetheless, historical shortcomings have often placed zero-shot methods behind their few-shot counterparts in terms of performance. Recent strides in the field of NLP, particularly the emergence of general-purpose Large Language Models (LLMs), have opened up exciting avenues in multi-task learning and zero-shot problem-solving (Ferraz et al., 2023). These models exhibit remarkable skills across various tasks (Brown et al., 2020; Touvron et al., 2023) but still encounter difficulties when adapting to specific domains where highly specialized knowledge may be entirely absent from their training data (Yang et al., 2023; Zhang et al., 2023b). In the realm of few-shot text classification, the challenge of acquiring annotated data becomes increasingly daunting, particularly when confronted with imbalanced datasets (Ferraz et al., 2021). Common benchmark datasets used for few-shot text classification tasks often exhibit a semblance of balance or slight imbalance. However, such datasets represent rare exceptions in the real-world landscape, where data distributions are typically skewed and imbalanced, mirroring the inherent complexity of practical scenarios. The prevalence of imbalanced data poses a significant challenge, as traditional random sampling strategies become increasingly suboptimal. In scenarios where one class overwhelmingly dominates, random sampling tends to favor the majority class, resulting in data selection that inadequately represents the underrepresented and rare classes. To address these challenges, in this paper we introduce an innovative automatic data selection architecture for few-shot learning. Our approach is designed to identify the most informative and representative data points that should be annotated by humans in low-resource, annotation-scarce scenarios. It leverages a framework that systematically orders data points based on their likelihood to (i) belong to distinct classes, thereby avoiding unnecessary redundancy in human annotation efforts, and (ii) enhance the overall performance of the learning model. Our evaluation of this approach encompasses various low-resource natural language processing datasets, demonstrating its capacity to minimize redundancy in human annotation efforts and improve model performance compared to traditional random sampling or manual data selection strategies, particularly in cases with a limited number of annotated examples. In summary, this work presents two primary contributions: 1. The introduction of an automatic data selection architecture for few-shot learning that leverages active learning principles to identify the most informative and representative data points for annotation. 2. An extensive analysis of various implementations of our architecture, highlighting its effectiveness to build the first version of a dataset in the context of low-resource text classification. Our results emphasize the benefits of informed data selection, which not only streamlines the annotation process but also results in a more diverse set of annotated data. Furthermore, models trained with these diverse datasets exhibit improved performance, which may benefit subsequential iterations of the dataset with Active Learning techniques. Our experiments unveil the potential of informed data selection strategies in addressing the challenges of few-shot learning in low-resource NLP scenarios. 
2 Background In low-resource NLP settings, where annotated data is scarce and expensive to obtain, Active Learning (AL) (Ren et al., 2021) methods show themselves as a very promising approach. AL attempts to maximize the performance gain of a model by annotating the smallest number of samples. AL algorithms select data from an unlabeled dataset and query a human annotator only on this selected data, which aims to minimize human efforts in annotation by using only the most informative data. Uncertainty sampling (Zhu et al., 2010) is among the most used method to select which points to be annotated. It employs a single classifier to pinpoint unlabeled instances where the classifier exhibits the lowest confidence. Other approaches include queryby-committee (Kee et al., 2018), where a pool of models is used to find diverse disagreements, margin sampling (Ducoffe and Precioso, 2018), and \fentropy sampling (Li et al., 2011). The first one looks for points where models disagree the most on the predicted labels; while the second selects data points with the highest entropy, indicating the lowest classification probability across all potential classes An essential aspect of AL involves the allocation of annotation budgets. Given that human effort is dedicated to annotating data, it is crucial to maximize its utility and minimize human effort. Various strategies have emerged to address this challenge. Recent research suggests optimizing directly for human effort, while others combine model uncertainty with diverse data representation through diversity sampling. A holistic approach combines these factors with cost-effectiveness, weighting data based on anticipated reductions in loss, classification entropy, and acquisition cost. These approaches collectively aim to minimize redundancy, which occurs when a human annotates a data point that the model would predict the correct label in subsequent iterations. In this work, we deal with the very first version of a dataset, which will serve as the foundation for iterative model improvement using AL methods. Consequently, our primary focus is not on optimizing cost-effectiveness, as the data was obtained through random sampling. Instead, we are exploring alternative data selection strategies to ensure that the initial data pool closely resembles a \u201cnearideal\u201d random sample. This selection should not only minimize unnecessary annotations but also elevate the model\u2019s performance above the random average. To achieve this goal, we employ an uncertainty-based strategy to address two distinct challenges: identifying data points that are distant from the decision boundary and selecting examples that offer a more diverse and informative perspective on the dataset. In addition to uncertainty estimation, various strategies are available for actively selecting data points to enhance low-resource NLP models. Diversity-based methods place their focus on achieving a balance between informativeness and the diversity of concepts or linguistic structures within the selected subset. This approach aims to prevent the model from learning biased information. Such balance can be achieved through techniques like calculating pairwise distances between data points and employing sampling strategies to select diverse examples. For instance, Sener and Savarese (2018) employed the cosine similarity between word vector representations and a k-center greedy algorithm to identify the most diverse subset of data. Meanwhile, Zhang et al. 
(2021) utilized a mutual information-based criterion to ensure that the selected data points are positioned far apart from each other in the embedding space. Additionally, there are works that combine diversity and uncertainty sampling in order to enhance the model\u2019s performance. 3 Methods To tackle the challenge of determining which data to annotate, we have devised Informed Data Selection methods, which, in practice, can be thought of as ordering algorithms when executed to completion. Random data selection can sometimes result in an imbalanced distribution of labels for human annotation, leading to an overabundance of certain labels while leaving others underrepresented. Our proposed methods also address this issue since the labels are not known before the annotation process. However, our findings indicate that our approach may be conducive to achieving a more equitable distribution of documents across various labels. We contend that our method is particularly well-suited for situations where humans are faced with a complete lack of labeled data. Here, a dataset consists of words, phrases or documents that must be labeled, and will be referred to in this paper as \u201cdocuments\u201d. We have selected random sampling as our baseline method and have developed three additional methods for comparison against this baseline. These methods are constructed using distinct heuristics: (i) The first method assesses semantic similarity and prioritizes documents with low similarity to those already selected; (ii) The second method involves clustering embeddings and systematically selects one document from each cluster based on cluster size; and (iii) The third method employs random sampling to choose documents with lower lexical similarity, excluding those that share too many common n-grams. Further elaboration on these methods is provided below. Let D be a set of documents di. Let E be the set of embeddings for each document di \u2208D. We define C = {c1, ..., cnclasses}, |C| = nclasses, as the set of target classes for the classification task in supervised training. Dselected is the set D rearranged by f according to the Informed DataSelection meth\fods proposed here, with |Dselected| = |D|, Dselected = f(nclasses, D, E), (1) Elements from Dselected are then selected to constitute Da with the most relevant documents for labeling. Let nshots be the target number of annotated documents per class. The set of annotated documents is Da = {Dc1 a , Dc2 a , ..., D cnclasses a }, with |Da| = |Dc1 a | + |Dc2 a | + ... + |D cnclasses a |. Ideally, we want |Dci a | = nshots. The overannotation rate \u03b8 is defined as the excess of documents annotated with the respective method used up to the target number nshots of annotated documents for each class ci \u2208C, with: \u03b8 = |Da|/(nclasses \u2217nshots). (2) It measures the excess of annotated documents generated by the method until the desired target nshots is achieved for each specific class ci. We now describe the three Informed Data Selection methods proposed in this paper. 1) Reverse Semantic Search (RSS): Given a set of documents D, its respective set of embeddings E, and a similarity function between pairs of embeddings sim(x1, x2), RSS calculates the similarity matrix between all embeddings of E. The similarity matrix S is an |D| \u00d7 |D| matrix whose (i, j) element equals the similarity sim(ei, ej) between ei, ej \u2208E, with ei and ej being the embeddings of di, dj \u2208D, di \u0338= dj. 
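As a sketch of the quantities just defined, the snippet below computes the similarity matrix S (assuming cosine similarity for sim(·, ·), which the definition above leaves open) and the overannotation rate θ of Eq. (2); rss_order anticipates the greedy dissimilarity ordering described in the next paragraph. The function names are ours and the min-max criterion is one possible reading of "next most dissimilar", so this is an illustration rather than the reference implementation.

```python
import numpy as np


def cosine_similarity_matrix(E):
    """S[i, j] = sim(e_i, e_j) for a matrix E with one document embedding per row."""
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E_norm @ E_norm.T


def overannotation_rate(num_annotated, n_classes, n_shots):
    """Eq. (2): theta = |D_a| / (n_classes * n_shots), where num_annotated is how many
    documents the human labeled before every class reached n_shots examples."""
    return num_annotated / (n_classes * n_shots)


def rss_order(S):
    """Greedy dissimilarity ordering (the RSS procedure described next): start from the
    least similar pair, then repeatedly append the document least similar to the already
    selected set (here: smallest maximum similarity to the selected documents)."""
    S = S.copy()
    np.fill_diagonal(S, np.inf)                   # ignore self-similarity
    i, j = np.unravel_index(np.argmin(S), S.shape)
    selected = [int(i), int(j)]
    remaining = set(range(S.shape[0])) - set(selected)
    while remaining:
        nxt = min(remaining, key=lambda k: S[k, selected].max())
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```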
RSS initially selects the two documents with the least similarity and puts both in a new set named Dselected. Then, iteratively, RSS continues to select the next most dissimilar element from the rest of the set {D \u2212Dselected}. RSS stops when |Dselected| = |D|. In fact, RSS sorts the documents in D based on their dissimilarity. The idea is that the annotation process is performed for each document in the new set generated Dselected, in order, until at least nshots are obtained for each of the nclasses. 2) Ordered Clustering (OC): Given a set of documents D and its respective set of embeddings E, OC applies a hierarchical and density-based clustering algorithm that assigns a membership probability to each document in relation to each cluster, indicating the probability of that document being in that cluster. Then, OC orders the clusters based on their size, i.e., based on the number of documents that belong to a given cluster. Finally, OC exhaustively selects the document with the lowest membership probability from each cluster, from largest to smallest cluster, and removes it from the cluster, placing it, in removal order, in Dselected. The OC iterative process stops when all clusters are empty. Here too, the annotation process is performed for each document in the new set generated Dselected, in order, until at least nshots are obtained for each of the nclasses. 3) Limited Lexical Similarity (LLS): Given a set of documents D, a lexical comparison function g(d1, d2) (based on BLEU score, ROUGE score or other metrics) and a threshold value \u03b2, LLS chooses the first document di randomly and inserts it into the initially empty set Dselected. LLS then proceeds by choosing the next document di+1 at random, discarding it if g(di+1, di) > \u03b2 and keeping it otherwise. LLS stops when there are no more documents to select. Similar to the RSS and OC methods, the generated set can have many elements. Note that in this case, |Dselected| may be smaller than |D|, given that some documents were discarded. Thus, the annotation takes place by removing documents from Dselected, in the order in which they were inserted in Dselected, until at least nshots are obtained for each of the nclasses. 4 Experimental Setup This section outlines the experimental setup for evaluating our proposed Informed Data Selection architecture. The evaluation is conducted on five text classification datasets, selected to explore varying degrees of data imbalance, class diversity, language, and domain. In this section, we present the datasets used and describe two key experimental settings: Human Annotation and Few-shot learning with selected data. 4.1 Datasets We use the following datasets in our experiments: \u2022 AgNews (Zhang et al., 2015): A news dataset with 4 classes and balanced data distribution. It consists of 120,000 training examples and 7,600 test examples, available only in English. \u2022 SST5 (Socher et al., 2013): A sentiment analysis dataset with 5 classes and a slightly imbalanced data distribution. It contains 8,544 training examples, 1,101 validation examples, and 2,210 test examples, available in English. \u2022 Emotion (Saravia et al., 2018): An emotion analysis dataset with 5 classes and imbalanced data distribution. It includes 16,000 training \fFigure 1: Full Architecture of our Settings. Results from RQ 1 are evaluated with metric Overannotation Rate. Results from RQ 2 use metrics Accuracy and Macro-F1 Score. examples and 2,000 test examples, available in English. 
\u2022 Multilingual Sentiment Analysis (MSA) 1: A multilingual sentiment analysis dataset with 3 classes and balanced data distribution. We make use of the Portuguese subset of this dataset, that contains 1,839 training examples and 870 test examples. \u2022 BRNews 2: A Brazilian Portuguese news dataset with 19 classes and imbalanced data distribution. It comprises 176,114 training examples and 176,114 test examples, available only in Portuguese. The train and test splits are utilized for training and evaluation, unless specified otherwise. An overview of these datasets is provided in Table 1. The choice of these datasets aims at isolating and scrutinizing key data distribution variables. Our focus centers on examining the impact of factors such as the number of samples per class, the quantity of classes within each dataset, the extent of data imbalance, and the language (English or Portuguese) on the outcomes of Informed Data Selection methods. Table 1: Datasets Characteristics Dataset # docs classes Balancing Lang AgNews 127600 4 balanced En SST5 11855 5 slightly imbalanced En Emotion 18000 6 imbalanced En MSA 3033 3 balanced Pt BRNews 352228 19 very imbalanced Pt 4.2 Research Questions In our study, we aim to address specific research questions through distinct experimental settings, each designed to provide insights into the efficacy of our Informed Data Selection methods. These experimental settings are detailed below. 1Available on https://huggingface.co/datasets/ tyqiangz/multilingual-sentiments 2Available on https://huggingface.co/datasets/ iara-project/news-articles-ptbr-dataset 4.2.1 RQ1: Which method allows for more efficient human annotation? To tackle this question, we simulate a real-life scenario where no annotated data is initially available, and human annotators are required to annotate the data. We compare different sorting methods designed to prioritize annotation and, leveraging known ground-truth, we quantify the overannotation rate (see Eq. 1) that each method might entail. In this context, we compare the performance of our Informed Data Selection methods with that of a random sampling strategy, referred to as Random. 4.2.2 RQ2: Which method yields better few-shot learning? To address this second question, we turn our attention to models trained on the dataset created in the context of RQ1. The goal is to determine whether the more efficient annotation process comes with a price, and could potentially lead to biased models, resulting in decreased performance compared to conventional random sampling. Conversely, our initial hypothesis suggests that Informed Data Selection, by increasing data diversity, will lead to model improvement, as it provides more knowledge with same amount of training data. 4.3 Evaluation Metrics Within the context of RQ1 setting, the primary evaluation metric is the overannotation rate \u03b8 (Eq. 2). This metric is relevant as in resource-constrained scenarios, the imperative lies in the minimization of excessive annotation. For this metric lower values mean more efficiency. As for the RQ2 setting, we employ conventional metrics commonly used in text classification. These include Accuracy, which measures the percentage of correctly classified instances, and, exclusively for the very imbalanced dataset, the Macro F1-score, a metric that calculates the harmonic mean of precision and recall for each class and then averages these values across all classes. 
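For concreteness, these two metrics can be computed with scikit-learn as in the following minimal sketch (our illustration; the authors' evaluation code is not shown in this excerpt):

```python
from sklearn.metrics import accuracy_score, f1_score


def evaluate(y_true, y_pred):
    """Accuracy for all datasets; macro F1 is additionally reported for the
    heavily imbalanced BRNews setting."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }
```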
4.4 Implementation Details For addressing RQ1, our chosen embedding model for RSS and OC is paraphrase-multilingual-mpnet-base-v2 (available on https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2). To perform clustering in OC, we employ the HDBSCAN algorithm (Campello et al., 2013). We employ the BLEU score (Papineni et al., 2002) as the comparison function in LLS. The entire process for LLS and Random is executed identically 10 times, and results are reported as mean values along with confidence intervals. Regarding RQ2, we train models under two distinct configurations to isolate the influence of the training algorithm for few-shot learning. We utilize the HuggingFace Transformers library (Wolf et al., 2020) and employ the following methods: • FINETUNE: We fine-tune XLM-Roberta-large (Conneau et al., 2020), a pre-trained encoder-based Language Model, following conventional fine-tuning procedures for Sequence Classification. The training process spans 30 epochs with a learning rate of 2 × 10^-5. • SETFIT: For this method, we utilize Sentence Transformer fine-tuning (SetFit) (Tunstall et al., 2022), an efficient approach for few-shot learning in encoder-based models. SetFit dynamically generates training pairs from the annotated data and leverages a contrastive loss for training the model on the classification task. As the base model, we also use paraphrase-multilingual-mpnet-base-v2. Results in RQ2 for LLS and Random, which exhibit stochastic behavior, are presented in terms of mean values and standard deviations across 10 runs. The experiments are conducted across a range of nshots values, specifically 8, 16, 32, and 64, with a batch size of 16 for the training process. 5 Results We compare the performance of our proposed Informed Data Selection methods with the random sampling strategy on the five datasets. 5.1 Efficiency in Human Annotation (RQ1) Charts in Figure 2 show results for experiments where we measure the overannotation rate θ as a function of the number of samples per class in the dataset, for each method (RSS, OC, LLS and Random as baseline). [Figure 2: Overannotation Rate θ per Dataset and Method. Five panels (AgNews, MSA, SST5, Emotion, BRNews) plot θ against the number of samples per class (10 to 60) for RSS, OC, LLS and Random; OC is absent from the BRNews panel.] Methods LLS and Random are executed 10 times, and averaged results (along with confidence intervals) are shown. In balanced datasets, we observe that no method consistently outperforms the random baseline. This is seen in AgNews and MSA. It can be explained, as we have mentioned before, by the distribution of classes in these datasets: both are heavily balanced, which tends to favor random sampling methods. So, when it comes to balanced data distributions, the annotator need not worry about the overannotation of the random method. It is interesting to note that in MSA, when nshots is in the range of 30 to 60, RSS would indeed be a better choice than random sampling. Also, our methods are slightly more effective in MSA than in AgNews.
The language factor may play a minor role here: because our embedding model, although multilingual, was trained on more English than Portuguese data, its embeddings are less tuned to the Portuguese language, which might explain why RSS promotes variety for a longer range of nshots, but eventually converges with most other methods. Aside from this possible model-related factor, language does not seem to be a relevant factor for our selection methods. For imbalanced data distributions, two of our methods consistently outperform random sampling: RSS and OC. We observe a lower overannotation rate \u03b8 in SST5 and Emotion when nshots < 30, indicating that both RSS and OC are a better fit than random sampling in imbalanced distributions. As we increase nshots further from 30, only RSS in the Emotion dataset worsens, but methods are overall are more efficient in choosing which data to annotate, generating less excess of annotations. For a heavily imbalanced distribution, we see a different behavior. We observe that as number of classes and data imbalance grow, overannotation rate \u03b8 increases for every method tested (BRNews has 10 times more overannotation rate than balanced datasets). In turn, OC generates too much overannotation rate (more than 6 times than Random baseline), and is thus considered an outlier and excluded from the chart. Results show that RSS considerably outperforms Random baseline for nshots < 40. This is once again due to the fact that this dataset has a much higher number of classes, with very imbalanced distribution of documents per each class, much closer to a real-life scenario humans find themselves. In these scenarios, our method thrives, generating as few as half excess annotations when compared to the Random method. However, as observed for every dataset, our methods and Random baseline also converge when nshots increases further away from around 50. 5.2 Model Performance (RQ2) Figure 3 shows results for experiments where we compare the performance of classifiers trained with data selected by our methods and the Random method. Because OC fails to generate a feasible excess of annotation for BRNews, it is deemed as not applicable and therefore excluded from reports. As a general result, we observe that our methods OC and LLS fail to consistently outperform the Random baseline. However, RSS outperforms random sampling in almost every scenario. For both FINETUNE and SETFIT, RSS is better than random sampling for every dataset with the exception of the AgNews, where random sampling yields higher accuracy. A mix of many factors may be responsible for this: first, AgNews is balanced, which favors random sampling when selecting training data; second, the task of AgNews is simple when compared to other datasets, because classes in it have distinct traits (ie. they refer to distinct themes, such as Sports, Technology, etc) which may help with decision boundaries of the model. The other balanced dataset, MSA, does not have these distinct traits for its classes, which instead express a kind of gradation (ie. Positive, Neutral, Negative). In other words, the classification task in MSA is tougher, which means that selecting data with more variability can effectively boost model performance. We note that the higher the degree of data imbalance, the more consistently RSS will outperform random sampling. However, reporting only accuracy in a heavily imbalanced dataset is insufficient to adequately represent performance of a classifier. 
Thus, Table 2 reports the macro F1-score for both training methods on BRNews. We see that for FINETUNE, RSS performs consistently better, while SETFIT also shows a slight improvement when compared to Random, falling above the confidence interval only for nshots = 8. This suggests that both RSS and Random perform almost equally well across classes once class imbalance is set aside, indicating that both methods succeed at selecting diverse data for model training. Still, RSS provides higher accuracy, turning it into the recommended method for low-resource setups. [Figure 3: Accuracy over evaluation datasets. For each dataset (AgNews, SST5, Emotion, MSA, BR News), two panels (FINETUNE and SETFIT) plot accuracy against nshots in {8, 16, 32, 64} for RSS, OC, LLS and random; OC is absent from the BR News panels.] Table 2: Macro F1-score on the BRNews dataset for RSS and Random selection, under both training methods. FINETUNE, nshots 8: RSS 58.6, Random 56.7 ± 2.3; FINETUNE, nshots 16: RSS 62.0, Random 60.6 ± 0.8; FINETUNE, nshots 32: RSS 65.2, Random 62.8 ± 0.9; FINETUNE, nshots 64: RSS 66.8, Random 63.6 ± 0.5; SETFIT, nshots 8: RSS 46.83, Random 45.0 ± 1.7; SETFIT, nshots 16: RSS 48.8, Random 48.8 ± 1.9; SETFIT, nshots 32: RSS 52.3, Random 52.2 ± 1.0; SETFIT, nshots 64: RSS 55.9, Random 55.6 ± 1.2. Another important result is the convergence of all methods as nshots grows. Because our methods are suited to the construction of a very first version of a dataset for Active Learning, both the overannotation rate and model performance converge when nshots > 64. A reason is that, as the number of selected documents grows, diversity will also grow. Although our results show that our selection methods promote more diversity for lower nshots, any selection method that does not apply oversampling will bring diversity if nshots keeps increasing. Thus, other methods outpace ours in promoting diversity when we leave the realm of few-shot learning, i.e. when we annotate too much data. This means that when the goal is to annotate many documents per class, most of the methods evaluated in this work are not well suited to the selection, and random sampling becomes a better strategy. 6" + }, + { + "url": "http://arxiv.org/abs/2201.01337v3", + "title": "ZeroBERTo: Leveraging Zero-Shot Text Classification by Topic Modeling", + "abstract": "Traditional text classification approaches often require a good amount of\nlabeled data, which is difficult to obtain, especially in restricted domains or\nless widespread languages. This lack of labeled data has led to the rise of\nlow-resource methods, that assume low data availability in natural language\nprocessing. Among them, zero-shot learning stands out, which consists of\nlearning a classifier without any previously labeled data. The best results\nreported with this approach use language models such as Transformers, but fall\ninto two problems: high execution time and inability to handle long texts as\ninput. This paper proposes a new model, ZeroBERTo, which leverages an\nunsupervised clustering step to obtain a compressed data representation before\nthe classification task.
We show that ZeroBERTo has better performance for long\ninputs and shorter execution time, outperforming XLM-R by about 12% in the F1\nscore in the FolhaUOL dataset. Keywords: Low-Resource NLP, Unlabeled data,\nZero-Shot Learning, Topic Modeling, Transformers.", + "authors": "Alexandre Alcoforado, Thomas Palmeira Ferraz, Rodrigo Gerber, Enzo Bustos, Andr\u00e9 Seidel Oliveira, Bruno Miguel Veloso, Fabio Levy Siqueira, Anna Helena Reali Costa", + "published": "2022-01-04", + "updated": "2022-06-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction The current success of supervised learning techniques in real-world Natural Language Processing (NLP) applications is undeniable. While these techniques require a good set of labeled data, large corpora of annotated texts are dif\ufb01cult to obtain, as people (sometimes experts) are needed to create manual annotations or revise and correct prede\ufb01ned labels. This problem is even more critical in languages other than English: statistics1 show that English is used by 63.1 % of the population on the internet, while Portuguese, for instance, is only used by 0.7 %. This scenario has contributed to the rise of Low-Resource NLP, which aims to develop techniques to deal with low data availability in a speci\ufb01c language or application domain [Hedderich et al., 2021]. Recently, the concept of zero-shot learning emerged in NLP: a semi-supervised approach in which models can present results equivalent to those of supervised tasks, such as classi\ufb01cation in the absence of labeled data. Current approaches to the zero-shot text classi\ufb01cation task (0SHOT-TC) make use of the good performance that Transformers have demonstrated in text entailment tasks [Yin et al., 2019]. In order to be able to process text in a way that is not uniquely suited to any speci\ufb01c task or data-set, these Transformers are \ufb01rst pre-trained in large general databases (usually taken from Wikipedia) 1Statistics available at: https://w3techs.com/technologies/overview/content_language. Accepted at PROPOR 2022: 15th International Conference on Computational Processing of Portuguese arXiv:2201.01337v3 [cs.CL] 4 Jun 2022 \fand then \ufb01ne-tuned into a small mainstream data-set for the natural language inference task (such as GLUE [Wang et al., 2019] and XNLI [Conneau et al., 2018]). However, the use of models based entirely on Transformers falls into two critical problems: (i) limitation of the maximum size of the input text, and (ii) long run-time for large volumes of data. While there are transformer-based solutions to these problems individually [Beltagy et al., 2020, Sanh et al., 2019, Zaheer et al., 2020], to the best of our knowledge, there is no solution that addresses both, nor even in the context of 0SHOT-TC. In this paper, we propose a new hybrid model that merges Transformers with unsupervised learning, called ZeroBERTo \u2013 Zero-shot BERT based on Topic Modeling \u2013, which is able to classify texts by learning only from unlabeled data. Our contribution not only handles long inputs \u2013 not limiting the input size and considering every input token to encode the data \u2013 but also offers a faster execution time. We propose an experimental setup with unlabeled data, simulating low-resource scenarios where real-life NLP researchers may \ufb01nd themselves. Then, we compare ZeroBERTo to a fullyTransformer-based zero-shot on a categorization dataset in Portuguese, FolhaUOL2. 
Our results show that our model outperforms the previous one, in the best scenario, with about 12 % better label aware weighted F1-score and around 13 times faster total time. The paper is structured as follows: Sect. 2 presents a background of how it is possible to move from data scarcity to zero-shot learning, as well as the related work on getting the best model for the 0SHOT-TC task. Sect. 3 formalizes the ZeroBERTo task and describe its training and inference procedures. Then, Sect. 4 describes the experimental setup that makes it possible to simulate low-resource scenarios to evaluate the proposed model. Finally, the discussion of the results of the experiments along with our \ufb01nal remarks is in Sect. 5. 2 Background and Related Work The \ufb01rst approach to overcome the shortage of labeled data for classi\ufb01cation suggests adopting data augmentation strategies [Jacobs, 1992], relying on methods to generalize from small sets of already annotated data. Still, the problem persists when there is an extreme lack of data. An alternative approach is to treat the task as a topic modeling problem. Topic modeling is an unsupervised learning technique capable of scanning a set of documents, detecting patterns of words and phrases within them, and automatically clustering groups of similar words and expressions that best characterize a set of documents [Chen et al., 2016]. There is usually a later labeling step for these clusters, which can be a problem as their interpretation is sometimes challenging, and a labeling error can affect the entire process. An automatic method for this is in our interest. The context presented helps explain the growing interest in the \ufb01eld of Low-Resource NLP [Chang et al., 2008, Hedderich et al., 2021], which addresses traditional NLP tasks with the assumption of scarcity of available data. Some approaches to this family of tasks propose semi-supervised methods, such as adding large quantities of unlabeled data to a small labelled dataset [Nigam et al., 2000], or applying cross-lingual annotation transfer learning [Bentivogli et al., 2004] to leverage annotated data available in languages other than the desired one. Other approaches try to eliminate the need for annotated data for training, relying, for example, on pre-trained task-agnostic neural language models [Meng et al., 2020], which may be used as language information sources, as well as representation learning models [Ji and Eisenstein, 2014] for word, sentence or document classi\ufb01cation tasks. Recent breakthroughs in pre-trained neural models have expanded the limits of what can be done with data shortage. The Transformer model [Vaswani et al., 2017], which relies solely on attention mechanisms for learning, followed by BERT [Devlin et al., 2019] \u2013 a pre-trained Transformer encoder capable of deep bidirectional encoding \u2013 offered the possibility of using general-purpose models with previous language understanding. With little to no \ufb01ne-tuning, BERT-like models have been successfully applied to most natural language comprehension tasks [Logeswaran et al., 2020], and also show a signi\ufb01cant reduction in the need for training data [Brown et al., 2020]. Such models tend to work better for the 0SHOT-TC task, as they carry information about the context and semantic attributes within their pre-trained parameters. 
On the downside, pre-trained Transformers are complex, with millions of trainable parameters and slow processing of large quantities of data, and due to memory issues, most pre-trained Transformers cannot process inputs larger than 512 tokens at a time. 2Available at: https://www.kaggle.com/marlesson/news-of-the-site-folhauol. 2 \fAlso, attention models have another problem related to input size: attention cannot keep track of all information present in a large text, worsening the performance of the models. In this context, zero-shot learning approaches stand out [Socher et al., 2013]. A simple way to explain zero-shot is to compare its paradigm with humans\u2019 ability to recognize classes of objects by having a high-level description of them without seeing an example of the object previously. Yin et al. [2019] de\ufb01nes that 0SHOT-TC aims to learn a classi\ufb01er f : X \u2192Y , whereas classi\ufb01er f(.), however, does not have access to data X speci\ufb01cally labeled with classes from Y . We can use the knowledge that the model already has to learn an intermediate layer of semantic attributes, which is then applied at inference time to recognize unseen classes during the training stages [Zhang et al., 2019]. Several works that seek to improve the performance of zero-shot learning inspired ours. Li et al. [2015] worked in the image domain, seeking to develop a two-stage model that \ufb01rst learns to extract relevant attributes from images automatically and then learns to assign these attributes to labels. Our proposal performs the same two steps for the text classi\ufb01cation problem but does not use any speci\ufb01c knowledge of external data or require any labelled data. In the text domain, Mekala and Shang [2020] de\ufb01nes weak-supervised learning similar to our de\ufb01nition of zero-shot learning. With unlabeled data and a list of classes as inputs, it applies seed word lists to guide an interactive clustering preview. Meng et al. [2020] uses topic mining to \ufb01nd out which words have the same semantic meaning as the proposed labels, and with that makes a \ufb01ne-tuning of the language model assuming the implicit category as the presence of these words in the text. Unlike these approaches, our model does not require the user to have any seed word for the labels, and instead of automatically learning them from the labels themselves, ZeroBERTo discovers them from the input data through topic modeling and then assigns them to the labels based on the language model used. 3 Proposed Method In this section, we introduce ZeroBERTo which leverages Topic Modeling and pre-trained Language Models (LMs) for the task of zero-shot multi-class text classi\ufb01cation (0SHOT-TC). 3.1 0SHOT-TC Task Formalization Given a set of unlabeled documents D = {d1, d2, . . . , dn} and a set of m semantically disjoint and relevant label names L = {l1, l2, . . . , lm}, 0SHOT-TC aims to learn f : D\u00d7L \u2192\u0398, |\u0398| = |L| and \u0398 de\ufb01nes a probability \u03b8i j \u2208[0, 1] for each label lj being the label for di [Yin et al., 2019]. A single-label classi\ufb01cation of a document di may then be carried out as lj \u2208L | j = argmax(j)(\u03b8i 1, \u03b8i 2, . . . , \u03b8i m) \u2013 as a notation simpli\ufb01cation, for now on, we mention this as argmax(l\u2208L)(\u0398i). 
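To make the decision rule concrete, the single-label prediction as an argmax over Θ can be written in a few lines of NumPy; the label names and probability values below are invented purely for illustration and do not come from the paper.

```python
import numpy as np

# theta[i, j] plays the role of P(label_j | doc_i); the numbers are made up for the example.
labels = ["esporte", "mercado", "ciencia"]
theta = np.array([[0.10, 0.75, 0.15],   # document 0
                  [0.60, 0.25, 0.15]])  # document 1
predictions = [labels[j] for j in theta.argmax(axis=1)]  # argmax_{l in L}(Theta_i)
print(predictions)  # ['mercado', 'esporte']
```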
Standard approaches to the 0SHOT-TC task treat it as a Recognizing Textual Entailment (RTE) problem: given two documents d1, d2, we say \u201cd1 entails d2\" (d1 \u21d2d2) if a human reading d1 would be justi\ufb01ed in inferring the proposition expressed by d2 (named hypothesis) from the proposition expressed by d1 [Korman et al., 2018]. In the case of 0SHOT-TC, d2 is the hypothesis H(lj), which is simply a sentence that expresses an association to lj. For example, in a news categorization problem, a label could be \u201csports\" and a hypothesis for it could be \u201cThis news is about sports\". Creating the hypothesis is essential to make it understandable by a Language Model, and allows us to discover the probability P(lj|di) = P(di \u21d2H(lj)), as P(di \u21d2H(lj)) can easily be inferred by a LM, using di and H(lj) as inputs. For the zero-shot text classi\ufb01cation task, it calculates the textual entailment probability of each possible label. This inference, however, is quite demanding computationally. 3.2 ZeroBERTo ZeroBERTo works differently: instead of processing the entire document in the LM, it learns a compressed data representation in an unsupervised way and only processes this representation in the LM. Thus, it is possible to obtain better performance with large inputs and shorter total time than the standard model, even considering the training time added by the unsupervised step. To learn this representation, ZeroBERTo uses a statistical model, named Topic Modeling (TM), which examines documents and discovers, from the statistics of the words that occur in them, which abstract \u201ctopics\u201d are covered, discovering hidden semantic structures in a text body. Given a set 3 \fof unlabeled documents D, TM aims at learning a set of topics T . A topic t \u2208T is a list of q words or n-grams that are characteristic of a cluster but not of the entire documents set. Then, TM also learns how to represent any document di \u2208D as a composition of topics expressed by \u2126T M(di) = (\u03c9i 1, \u03c9i 2, . . . , \u03c9i k), such that \u03c9i k denotes the probability of a document di belonging to a topic tk. With this in place, instead of analyzing the relation between document di and label lj, we determine the entailment between the learned topic representation \u2126T M(di) of each document and each label lj. Topics found are given as input to the LM, as a text list of words/expressions that represent the topic, in order to infer entailment probabilities. If the topic representation was learnt properly, then we can assume independence between lj and di given a topic tk, therefore P(lj|tk, di) = P(lj|tk) = P(tk \u21d2H(lj)). We then solve the 0SHOT-TC task by calculating the compound conditional probability \u03b8j i = P(lj|di) = X tk\u2208T P(lj|tk) \u2217P(tk|di) = X tk\u2208T P(tk \u21d2H(lj)) \u2217\u2126k T M(di) (1) for each label lj to determine \u0398i = (\u03b81 i , \u03b82 i , . . . , \u03b8m i ). Classi\ufb01cation is then carried out by selecting argmax(l\u2208L)(\u0398i). Algorithm 1: Given a set of documents D, a set of labels L, a hypotesis template H, a topic model TM and a Language model LM as input, ZeroBERTo-training (see Alg. 1) returns a trained model Z. For that, it trains TM on D using TM.FIT (line 2), that learns the topic representation of those documents. Then, it applies LM.PREDICT for all topics learned in TM (lines 4 to 7). This function, given a topic tk, returns the set of probabilities P(tk \u21d2H(lj)) for all lj \u2208L. 
In the end, the model Z gathers all information learned from D. Algorithm 2: ZeroBERTo-prediction leverages a trained model Z and a speci\ufb01c document di to return the predicted label l \u2208L (see Alg. 2). For this, it uses Z.TM.TOPICENCODER (line 1), that returns the topic representation \u2126T M(di) of di. This was learned by Z.TM in Alg. 1. Then, it calculates the equation (1) for all candidate labels (lines 2 to 8), returning the one with maximum probability. Algorithm 1 ZeroBERTo-training Require: D, L, H, TM, LM Ensure: Z 1: create Z \u25b7Instantiate ZeroBERTo 2: TM.FIT(D) \u25b7Topic Model Training 3: P \u2190{} 4: for each tk \u2208TM.topics do 5: pk \u2190LM.predict(tk, H(L)) 6: P \u2190P \u222a{pk} 7: end for 8: Z.TM \u2190TM 9: Z.P, Z.L \u2190P, L 10: return Z Algorithm 2 ZeroBERTo-prediction Require: Z, di Ensure: l 1: \u2126i T M \u2190Z.TM.TOPICENCODER(di) 2: \u0398i \u2190{} 3: for each lj \u2208Z.L do 4: \u03b8i j \u21900 5: for each tk \u2208Z.TM.topics do 6: \u03b8i j \u2190\u03b8i j + (P(tk) \u2217\u2126i T M(tk)) 7: end for 8: \u0398i \u2190\u0398i \u222a{\u03b8i j} 9: end for 10: return argmax(l\u2208L)(\u0398i) 4 Experiments In this section, we present the experiments performed to validate the effectiveness of ZeroBERTo. Considering that it would be dif\ufb01cult to evaluate our model in a real low-resource scenario, we propose an experimental setup to simulate low-resource situations in labeled datasets. We compare ZeroBERTo with the XLM-R Transformer, \ufb01ne-tuned only on the textual entailment task. To perform the unsupervised training and evaluation, we use FolhaUOL dataset3. 4.1 Dataset The FolhaUOL dataset is from the Brazilian newspaper \u201cFolha de S\u00e3o Paulo\u201d and consists of 167,053 news items labeled into journal sections (categories) from January 2015 to September 2017. 3Available at: https://www.kaggle.com/marlesson/news-of-the-site-folhauol. 4 \fTable 1: Number of articles by news category within FolhaUOL dataset after cleaning and organizing the data. Category # of articles Category # of articles Poder e Pol\u00edtica 22022 Educa\u00e7\u00e3o 2118 Mercado 20970 Turismo 1903 Esporte 19730 Ci\u00eancia 1335 Not\u00edcias dos Pa\u00edses 17130 Equil\u00edbrio e Sa\u00fade 1312 Tecnologia 2260 Comida 828 TV, Televis\u00e3o e Entretenimento 2123 Meio Ambiente 491 Categories too broad, that do not have a semantic meaning associated with a speci\ufb01c context (as the case of \u201ceditorial\" and \u201copinion\"), were removed from the dataset keeping only the categories presented in Table 1. For each news article, we take the concatenation of its title and content as input. Table 1 presents the data distribution by category. 4.2 Models We compare our proposal to the XLM-R model. XLM-R is the short name for XLM-RoBERTa-large-XNLI, available on Hugging Face4, which is state of the art in Multilingual 0SHOT-TC. It is built from XLM-RoBERTa [Conneau et al., 2020] pre-trained in 100 different languages (Portuguese among them), and then \ufb01ne-tuned in the XNLI [Conneau et al., 2018] and MNLI [Williams et al., 2018] datasets (which do not include the Portuguese language). It is already in the zero-shot learning con\ufb01guration described by Yin et al. [2019] with template hypothesis as input. The template hypothesis used was \u201cO tema principal desta not\u00edcia \u00e9 {}\u201d and texts larger than the maximum size of XLM-R (512 tokens) are truncated. 
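Putting Algorithms 1 and 2 and equation (1) together, the whole method can be sketched as below. The `tm` object (with `fit`, `topics` and `topic_encoder`) and the `lm.predict` call are hypothetical interfaces standing in for the topic model and the entailment language model; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
def zeroberto_train(docs, labels, hypothesis, tm, lm):
    """Algorithm 1: fit the topic model, then score every learned topic against every label."""
    tm.fit(docs)                                       # unsupervised topic modelling on D
    topic_label_probs = []
    for topic in tm.topics:                            # topic = list of representative words/n-grams
        hypotheses = [hypothesis.format(label) for label in labels]
        topic_label_probs.append(lm.predict(topic, hypotheses))  # P(t_k => H(l_j)) for every j
    return {"tm": tm, "P": topic_label_probs, "labels": labels}

def zeroberto_predict(model, doc):
    """Algorithm 2 / Eq. (1): combine topic-label entailment with the document's topic mixture."""
    omega = model["tm"].topic_encoder(doc)             # Omega_TM(d_i): topic proportions of doc
    scores = []
    for j, _ in enumerate(model["labels"]):
        scores.append(sum(p_k[j] * w_k for p_k, w_k in zip(model["P"], omega)))
    return model["labels"][max(range(len(scores)), key=scores.__getitem__)]
```

Because the language model is queried once per learned topic rather than once per document, classifying a new document only requires its comparatively cheap topic encoding, which is consistent with the shorter total time reported in the experiments.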
ZeroBERTo The implementation of our model here makes use of BERTopic [Grootendorst, 2020] with M-BERT-large (Multilingual BERT) [Devlin et al., 2019] as topic modeling step, and the same XLM-R described above as the Transformer for associating the topic representation of each document to labels. Repeating the use of XLM-R seeks to make the comparison fair. BERTopic\u2019s hyperparameters are: interval n for n-grams to be considered in the topic representation (n_grams_range \u2208{1, 2, 3}); number of representative words/ngrams per topic (top_n_words = 20); and minimum amount of data in each valid topic (min_topic_size = 10). The XLM-R template hypothesis used is \u201cO tema principal desta lista de palavras \u00e9 {}\u201d. 4.3 Evaluation To simulate real-world scenarios, we propose a variation of strati\ufb01ed k-fold cross-validation [Refaeilzadeh et al., 2009]. First, we split the data into k disjoint strati\ufb01ed folds, i.e. the data were evenly distributed in such a way as to make the distribution of the classes in the k folds follow the distribution in the entire dataset. Next, we use these k-folds to perform the following 4 experiment setups: Exp. 1 Labeling a dataset: Simulates a situation where one needs to obtain the \ufb01rst labeling of a dataset. ZeroBERTo is trained in (k \u22121) folds and has the performance compared to XLM-R in the same (k \u22121) folds, in order to assess its ability to label data already seen. Since this is unsupervised learning, evaluating the model\u2019s labeling ability in the training data makes sense as it was not exposed to the ground truth labels. Exp. 2 Building a model for additional inferences: Simulates a situation where the researcher wants to create a current model in a real-life application without having data labeled for it. ZeroBERTo is trained in (k \u22121) folds and can infer new data compared to XLM-R on the remaining fold. 4Available at: https://huggingface.co/joeddav/xlm-roberta-large-xnli 5 \fTable 2: Table shows the results of the experiments for the FolhaUOL dataset. P is weighted-average Precision, R is weighted-average Recall, and F1 is weighted-average F1-score. Exp. 1 Exp. 2 Exp. 3 Exp. 4 XLM-R ZeroBERTo XLM-R ZeroBERTo XLM-R ZeroBERTo XLM-R ZeroBERTo P 0.47 \u00b1 0.00 0.66 \u00b1 0.01 0.46 \u00b1 0.01 0.16 \u00b1 0.08 0.46 \u00b1 0.01 0.64 \u00b1 0.01 0.47 \u00b1 0.00 0.29 \u00b1 0.17 R 0.43 \u00b1 0.00 0.54 \u00b1 0.01 0.43 \u00b1 0.00 0.21 \u00b1 0.05 0.43 \u00b1 0.00 0.56 \u00b1 0.02 0.43 \u00b1 0.00 0.31 \u00b1 0.12 F1 0.43 \u00b1 0.00 0.54 \u00b1 0.01 0.42 \u00b1 0.01 0.15 \u00b1 0.07 0.42 \u00b1 0.01 0.52 \u00b1 0.02 0.43 \u00b1 0.00 0.19 \u00b1 0.17 Time 61h30min 9h21min 15h22min 6h20min 15h22min 1h10min 61h30min 2h25min Figure 1: Figure shows text entailment results between topics (X-axis) and labels (Y-axis) for the \ufb01rst 50 Topics (sorted by size) in fold 0 from Experiment 3. In total, 213 topics were generated in this experiment. Exp. 3 Labeling a smaller dataset: Simulates situation of scarcity of data in which, besides not having labeled data, little data is present. ZeroBERTo is trained in one fold and compared to XLM-R in the same fold. Considering the topic-representation learning stage, the presence of little data could be a bottleneck for ZeroBERTo since the topic representation may not be properly learned. Exp. 4 Building model for additional inferences but with a scarcity of training data: simulates again how the model would behave in a real-life application with few training data. 
ZeroBERTo is trained in 1 fold and compared to XLM-R in the remaining k \u22121 folds. We evaluated the performance of both models for each experiment with the following label-aware metrics: weighted-average Precision (P), weighted-average Recall (R), and weighted-average F1score (F1). For the k-fold CV, we use k = 5. Experiments were run on an Intel Xeon E5-2686 v4 2.3GHz 61 GiB CPU and an NVIDIA Tesla K80 12 GiB GPU using the PyTorch framework. To run XLM-R, we use batches sized 20 to prevent GPU memory over\ufb02ow. 4.4 Results Table 2 shows the results of the proposed experiments. Time for ZeroBERTo considers unsupervised training time and inference time. Further, as no training is required, only a single run of XLM-R was done on all data. Thus, the times for XLM-R are estimated. Nevertheless, in all experiments, the total time (training + execution) of ZeroBERTo was much lower than the execution time of XLM-R. Our model surpassed XLM-R in all metrics in the experiments in which the evaluation was performed on the data used in the unsupervised training (Exp. 1 and 3). Figure 1 presents a visualization for the entailment mechanism between topics and labels represented by term P(lj|tk) = P(tk \u21d2H(lj)) in equation (1). The darker the green, the greater the conditional odds. 5 Discussion and Future Work The experiments simulated low-resource scenarios where a zero-shot text classi\ufb01er can be useful. The results showed that it is possible to obtain a better performance in the 0SHOT-TC task with the addition of an unsupervised learning step that allows a simpli\ufb01ed representation of the data, as proposed by ZeroBERTo. Moreover, the proposed model presents itself as an excellent tool to help researchers deal with low-resource scenarios, such as the need to label an entire dataset without any previously labeled training data. Another interesting feature is that the model showed evidence of robustness for smaller amounts of data. In experiment 3, it was trained with 25 % of the data from 6 \fexperiment 1 and got similar performance metrics in lower time, refuting our concern that little data could be a bottleneck in the model. However, for con\ufb01gurations where ZeroBERTo was tested simulating real-life applications (Exp. 2 and 4), being exposed to new data, the performance was worse than XLM-R. The results suggest it occurs due to the inability of the embedded topic model to adequately represent new data as a composite of previously learned topics, over\ufb01tting training data. This is clear from observing the high variance of the metrics among the k-folds. It allows us to conclude that, for now, the scenarios presented in experiments 1 and 3 are more suitable for using the model. We have 0.54 of F1-score in the best scenario regarding the metrics obtained. Despite being a positive result considering that it is a multi-class classi\ufb01cation, there is much room for improvement. The main reason to be pointed out for low performances is the use of multilingual models that were not \ufb01ne-tuned in the Portuguese language, which is quite impressive. A critical remark to be made is concerning the memory and time trade-off. For example, ZeroBERTo was more than 10x faster than XLM-R in Exp. 3. However, the topic model used by ZeroBERTo bases its clustering on the HDBSCAN method, which reduces time taken for data processing but increases the need for memory [McInnes and Healy, 2017], which XLM-R does not do. As the size of input data grows, processing may become unfeasible. 
XLM-R, on the other hand, does not use any interaction between data and can be processed in parallel and distributed without any negative effect on the \ufb01nal result. It should be noted, however, that ZeroBERTo does not depend on BERTopic and can use other Topic Modeling techniques that address this issue more adequately in other scenarios. A signi\ufb01cant dif\ufb01culty of this work was that, as far as the authors are aware of, there are no large benchmark datasets for multi-class text classi\ufb01cation in Portuguese, nor general use datasets with semantically meaningful labels. In this sense, some future work directions involve the production of benchmark datasets for Portuguese text classi\ufb01cation (and 0SHOT-TC). It would also be interesting to produce Natural Language Inference datasets in Portuguese, which could, in addition to the existing ones [Fonseca et al., 2016, Real et al., 2020], enable \ufb01ne-tuning of Transformers 100 % in Portuguese. Then, it would be possible to compare the performance of the models using BERTimbau (BERTPortuguese) [Souza et al., 2020] both in clustering and classifying. It would also be worthwhile to test the proposed model in other domains: to name one, legislative data present similar challenges [Ferraz et al., 2021]. Another interesting future work would be to enable ZeroBERTo to deal with multi-label classi\ufb01cation, where each document can have none, one or several labels. Acknowledgments This research was supported in part by Ita\u00fa Unibanco S.A., with the scholarship program of Programa de Bolsas Ita\u00fa (PBI), and by the Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior (CAPES), Finance Code 001, CNPQ (grant 310085/2020-9), and USP-IBM-FAPESP Center for Arti\ufb01cial Intelligence (FAPESP grant 2019/07665-4), Brazil. Any opinions, \ufb01ndings, and conclusions expressed in this manuscript are those of the authors and do not necessarily re\ufb02ect the views, of\ufb01cial policy, or position of the \ufb01nanciers." + } + ], + "Laurent Besacier": [ + { + "url": "http://arxiv.org/abs/2210.11835v2", + "title": "A Textless Metric for Speech-to-Speech Comparison", + "abstract": "In this paper, we introduce a new and simple method for comparing speech\nutterances without relying on text transcripts. Our speech-to-speech comparison\nmetric utilizes state-of-the-art speech2unit encoders like HuBERT to convert\nspeech utterances into discrete acoustic units. We then propose a simple and\neasily replicable neural architecture that learns a speech-based metric that\nclosely corresponds to its text-based counterpart. This textless metric has\nnumerous potential applications, including evaluating speech-to-speech\ntranslation for oral languages, languages without dependable ASR systems, or to\navoid the need for ASR transcription altogether. 
This paper also shows that for\nspeech-to-speech translation evaluation, ASR-BLEU (which consists in\nautomatically transcribing both speech hypothesis and reference and compute\nsentence-level BLEU between transcripts) is a poor proxy to real text-BLEU even\nwhen ASR system is strong.", + "authors": "Laurent Besacier, Swen Ribeiro, Olivier Galibert, Ioan Calapodescu", + "published": "2022-10-21", + "updated": "2023-07-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "main_content": "Introduction In natural language processing (NLP), matching a text hypothesis with a text reference is a common practice to evaluate systems such as natural language generation, machine translation, etc. With the rise of speech generation and end-to-end speechto-speech (S2S) translation systems [1, 2], there is a growing need for speech-to-speech comparison directly in the signal domain [3]. This paper proposes a simple and efficient implementation of such \u201dtextless\u201d metric. More specifically, we want to develop a metric in order to compare a speech hypothesis (H) with a speech reference (R) along several axes. In this work, our main axis is meaning, i.e similarity score should be high if both utterances convey same message. But other axes could be interesting in the future: eg. high similarity if H and R voices are similar (similar speaker, gender, etc.). We want our textless metric to have a strong correlation with its text-based counterpart that would be applied to the transcripts of H and R (see figure 1). We believe such metric could be interesting for following use cases: (a) evaluating a S2S translation system w/o falling back to a transcription of H and R (unlike sentencelevel ASR-BLEU [1] does); (b) evaluating target languages for which we cannot fall back to a transcription such as Tamasheq [4] (>50% of languages are oral; even more are not equipped with good ASR) and (c) defining training objective for end-toend S2S model optimization. This paper is structured as follows: section 2 positions our contribution in respect to previous works. Section 3 describes a naive approach that fails and supports our proposal of a learnt metric. Section 4 presents our textless metric while section 5 illustrates its use through speech comparison experiments with synthetic and natural speech. Finally section 6 concludes this work and gives some perspectives. Figure 1: Our objective is to create a metric based on speech that strongly correlates with metrics based on text. 2. Background 2.1. Text-to-Text Comparison Metrics String comparison in automatic speech recognition (ASR) evaluation often utilizes word error rate (WER). However, more specific metrics for machine translation evaluation such as BLEU [5] and TER (Translation Edit Rate) [6] have been proposed to go beyond simply counting insertions/deletions/substitutions. Later, ChrF (character-level Fscore) [7] was proposed to address several shortcomings of BLEU such as sensitivity to tokenization. Finally since 2020 the use of contextualized text representations was experimented for evaluation such as in BERTScore [8]. 2.2. Learnt Metrics The common feature of metrics described in the previous paragraph is that they are unsupervised: they compare two sequences of tokens and the hope is that the obtained scores will correlate well with human judgements (manually obtained quality scores). On the other hand, learnt metrics such as BEER [9] and COMET [10] are specifically trained to correlate with human judgments. 
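As a brief, hedged illustration of the text-based metrics in Section 2.1 — the quantities the speech-based metric will later be trained to approximate — sentence-level BLEU and ChrF can be computed with the sacrebleu library; the example strings are invented and sacrebleu is only one possible implementation of these metrics.

```python
import sacrebleu

hyp = "the cat sat on the mat"
ref = "the cat is sitting on the mat"
print(sacrebleu.sentence_bleu(hyp, [ref]).score)  # sentence-level BLEU
print(sacrebleu.sentence_chrf(hyp, [ref]).score)  # character n-gram F-score (ChrF)
```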
Actually, COMET1 is more than a single metric and should rather be seen as an open-source framework for machine translation evaluation that can be used to: (a) evaluate systems with pre-trained metrics, and (b) train and develop new metrics. We will adapt COMET to speech inputs in this work. 2.3. Speech Comparison and Textless NLP Speech-to-speech comparison is not an unexplored territory. The first approaches for isolated word recognition from speech used Dynamic Time Warping (DTW) [11] to measure a distance directly between two speech signals. DTW aligns two sequences of feature vectors by warping their time axes to achieve an optimal match. However, even if DTW is still used for word 1https://unbabel.github.io/COMET/ arXiv:2210.11835v2 [cs.CL] 20 Jul 2023 \fspotting applications, it is ill-equipped to reach our goal of measuring subtle differences in meaning between long speech hypothesis and reference. These limitations of signal-based comparison metrics such as DTW lead us to get interested by the textless NLP area [12]. One building block of this emerging domain is the use of Speech-to-Units (S2U) encoders that automatically discover discrete acoustic units and decode speech into a pseudo-text. Examples of such encoders are HuBERT [13] or Wav2Vec2.0 [14] followed by a quantization function (using k-means algorithm for instance). Such representations were successfully used for automatic speech recognition (ASR) tasks which shows that, whether discretized or not, they convey information related to text message hidden in speech signal. Contemporaneous to this work, [15] propose a text-free metric for S2S evaluation but they train it on human annotations (to correlate with human judgements) whereas we will train our S2S evaluation metric to correlate with its text-based counterpart (which will allow us to take advantage of much more data as no human evaluation data is needed in our case). 3. A (Too) Na\u00a8 \u0131ve Approach Our first attempt was to apply text-based translation metrics to our pseudo-transcribed (S2U) speech signals. Discrete acoustic units were generated after clustering audio features; standard machine translation metrics such as BLEU were then applied to the unit sequences obtained. We then verified if speech-BLEU would correlate with text-BLEU when applied to multiple pairs (H, R) of utterances. We applied the following experimental setup: (a) build a dictionary of k centro\u00a8 \u0131ds from large speech data using k-means algorithm applied to cepstral features; (b) pseudo-transcribe the pairs of speech utterances by mapping each feature vector to the nearest centro\u00a8 \u0131d (using l2 or cosine distance); (c) reduce consecutive repetitions of the same discrete symbol into one instance (de-duplication). We computed speech-based metrics (and their text-based counterpart) on a subset of commonvoice4.0 english dataset.2 More precisely, we selected 20M pairs of utterances with at least one 4-gram in common (in order to have non-zero BLEU scores in our collection). Figure 2 presents a scatterplot of the text-based metric (on X axis) and speech-based metric (on Y axis) for a subset of those pairs when a vocabulary of k=50 acoustic units is used (we experimented with different values of k and a cosine metric instead of l2 with similar results). Figure 2: Scatterplots of text-BLEU (X axis) versus speechBLEU (Y axis) for a vocabulary of 50 acoustic units and l2 distance to the centro\u00a8 \u0131d. 
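A minimal sketch of the naive unit-BLEU baseline just described, covering steps (a)-(c) plus BLEU over the resulting unit strings. It assumes per-utterance cepstral feature matrices are already extracted, and it uses scikit-learn and sacrebleu as convenient stand-ins for whatever tooling was actually used.

```python
from itertools import groupby
from sklearn.cluster import KMeans
import sacrebleu

def fit_unit_inventory(pooled_feats, k=50):
    """(a) Learn k centroids from cepstral features pooled over a large speech collection."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(pooled_feats)

def pseudo_transcribe(kmeans, feats):
    units = kmeans.predict(feats)              # (b) nearest-centroid (l2) assignment per frame
    units = [u for u, _ in groupby(units)]     # (c) collapse consecutive repetitions
    return " ".join(f"u{u}" for u in units)    # one token per discrete acoustic unit

def naive_speech_bleu(kmeans, feats_hyp, feats_ref):
    hyp = pseudo_transcribe(kmeans, feats_hyp)
    ref = pseudo_transcribe(kmeans, feats_ref)
    return sacrebleu.sentence_bleu(hyp, [ref]).score
```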
We observe no correlation between text-based and speech-based metric with this na\u00a8 \u0131ve approach. Initially, we noted that our selection procedure, which in2https://commonvoice.mozilla.org/ volves choosing pairs of natural speech utterances that share at least one 4-gram, enables the collection of pairs with a wide range of BLEU scores between 0 and 1, reflecting their varying degrees of closeness. However, naive speech-BLEU does not correlate well with text-BLEU which shows that text-based metrics simply applied to discrete speech units fail. This leads us to propose a new approach that differs in two main points: \u2022 instead of local acoustic (cepstral) quantized units, we use HuBERT [13] units that have been shown to convey more contextualized and semantic speech information, \u2022 simple unsupervised (such as BLEU) metrics used are replaced by learnt metrics. 4. A Learnt Metric for Speech-to-Speech Comparison As it has been observed that text-based metrics applied to discretized acoustic speech units are unreliable, we believe that the best approach is to develop a metric that learns the semantic similarity between a speech hypothesis and a speech reference. To experiment with this idea, we re-use COMET [10] framework widely adopted in machine translation evaluation where it is trained to correlate with human judgements. We adapt it to our need as illustrated in figure 3: both audio H and R are pseudo-transcribed in a sequence of de-duplicated speech units (with HuBERT [13]). Both sequences of discrete units are mapped to a sequence of characters3 and encoded with a neural text encoder. Obtained b H and b R vectors are pooled and a regression layer predicts the score we want to approximate. In the follow-up experiments we use ChrF or BLEU textbased metrics as a target. Mean Square Error (MSE) loss is used to train model parameters. As done in initial COMET framework, not only regression layer parameters will be learnt during training but also parameters of the \u201dtext\u201d encoder (after 30% of the first epoch and for the rest of the training steps). We highlight below main differences between initial COMET and its adaptation to speech-to-speech comparison: \u2022 COMET uses source S, hypothesis H and reference R utterances to predict MT quality, whereas we only use H and R in this work (S is ignored during pooling operation), \u2022 COMET predicts human judgement of MT quality whereas our learnt metric predicts a text-based score (no human judgements are needed to train our metric), \u2022 COMET proposes two different training objectives: regression to predict a score or ranking using a triplet loss; we use only regression here, \u2022 COMET comes with several text encoders (XLM-Roberta [16], BERT [17]); we use XLM-Roberta (277M param.) to encode our sequence of discrete acoustic units as we believe it should be able to capture sequential patterns; XLMRoberta parameters are fine-tuned after 30% of first epoch.4 5. Experiments In order to show that our approach can learn several metrics, we first experiment on English synthetic speech and train a metric to predict speech-ChrF. In a second step we predict speechBLEU using English natural (human) speech. 3Each discrete unit is mapped to a rare character in the Unicode set 4Using a true speech encoder such as XLSR [18] to replace the stacking of HuBERT and XLM-Roberta is an option left for future work as it would require major modification of COMET codebase. 
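The model just described — both unit sequences encoded, pooled, and passed to a regression layer trained with an MSE loss against the text-based score — could be sketched as follows. This is a simplified stand-in for the modified COMET estimator, assuming `encoder` is a Hugging Face-style XLM-R model reading the unit-to-character strings; the pooling strategy and head sizes are illustrative choices rather than the exact ones used.

```python
import torch
import torch.nn as nn

class SpeechMetricEstimator(nn.Module):
    """Encode hypothesis/reference unit sequences, mean-pool, and regress to a text-based score."""
    def __init__(self, encoder, hidden=1024):
        super().__init__()
        self.encoder = encoder                      # e.g. XLM-R reading the unit "characters"
        dim = encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def pool(self, ids, mask):
        states = self.encoder(input_ids=ids, attention_mask=mask).last_hidden_state
        mask = mask.unsqueeze(-1).float()
        return (states * mask).sum(1) / mask.sum(1)  # mean over non-padding positions

    def forward(self, hyp_ids, hyp_mask, ref_ids, ref_mask):
        h = self.pool(hyp_ids, hyp_mask)
        r = self.pool(ref_ids, ref_mask)
        return self.head(torch.cat([h, r], dim=-1)).squeeze(-1)

# One training step against sentence-level ChrF/BLEU targets computed from the transcripts:
# loss = nn.functional.mse_loss(model(hyp_ids, hyp_mask, ref_ids, ref_mask), target_scores)
```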
\fFigure 3: Adaptation of the COMET framework to our textless metric. 5.1. ChrF prediction on synthetic speech We start from synthetic CVSS speech corpus [19] (English target part), a massively multilingual-to-English S2S corpus. To obtain dissimilar audios with different voices, we enrich CVSS using the following process applied to each English speech utterance: (a) ASR transcription; (b) BART [20] encoding and decoding to further add noise to the already noisy ASR transcript; and (c) TTS from the noisy transcript (with a different speaker voice). We end-up with a corpus of 256,882 pairs (H,R) of speech utterances (similar, slightly dissimilar or very dissimilar) with associated transcripts splitted into train (207,364 pairs), dev (14,759 pairs) and test (14,759 pairs). True text-ChrF distribution (on our test set) is displayed in figure 4 (left). We learn several metrics using COMET and display correlations (Pearson and Spearman) between true ChrF and learnt ChrF for different setups (table 1): \u2022 different input (text or speech), \u2022 different amount of training data (for learnt metrics): dev set (14.7k utterances) or train set (207.4k utterances), \u2022 different number of HuBERT acoustic units: 50 or 200, \u2022 different number of training epochs: 5 or 10. First row in table 1 is a topline were ChrF was learnt (using our dev set) with initial COMET framework and text inputs. As expected, neural architecture can learn to approximate a sequence based metric such as ChrF easily (high correlation between true and predicted ChrF scores). Remaining rows use speech H and R inputs: second row is the naive baseline presented in section 3 with poor correlation scores. Rows 3-6 display results obtained with our learnt textless metric (speechChrF). We observe that more acoustic units (200 instead of 50), adding training data (207k utterances instead of 14.7k utterances) and training longer (10 epochs instead of 5) improves correlation. To illustrate better what 0.779 Pearson correlation score means, figure 4 (right) displays distribution of our speechChrF scores (with best configuration of last row in table 1). We observe that left (text-ChrF) and right (speech-ChrF) distributions are very similar. Our modified COMET has learnt to replicate the text-ChrF distribution using speech input only. 5.2. BLEU prediction on natural speech 5.2.1. Setup We now evaluate on natural speech. We use 1M pairs obtained with methodology described in section 3 (commonvoice corFigure 4: Distributions of ChrF scores on the test set of prepared CVSS corpus: (left) text-ChrF (right) speech-ChrF. pus) where H and R are natural speech utterances most of the time from different speakers. Target score is now BLEU metric obtained from text. Our corpus is splitted into train (990k pairs), dev (5k pairs) and test (5k pairs). Figure 5 (left) displays BLEU distribution of our test set (train/dev distributions are similar): distribution is bimodal with many unmatched pairs in range [0;0.4] and even more matched pairs in range [0.8;1.0]. Figure 5: Distributions of BLEU on natural speech (test): (left) text-BLEU; (right) speech-BLEU; train/dev have close distrib. After extracting HuBERT-200 acoustic units for the full speech collection, we learn our speech-BLEU on the train set of 990,000 pairs of speech utterances (for 5 epochs) and evaluate on our dev and test (5k pairs each). Overall our model has 279M trainable parameters (among them 277M for XLMR) and was learnt in 60h on a single GPU-V100. 
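Once the model produces a score for every (H, R) pair of the dev or test set, the Pearson and Spearman correlations reported below are standard scipy calls; nothing here is specific to the paper's codebase.

```python
from scipy.stats import pearsonr, spearmanr

def metric_correlations(predicted_scores, reference_scores):
    """Agreement between learnt speech-based scores and their text-based targets."""
    return {"pearson": pearsonr(predicted_scores, reference_scores)[0],
            "spearman": spearmanr(predicted_scores, reference_scores)[0]}
```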
Training loss is displayed on figure 6: we clearly see when parameters of the XLM-R encoder (after 20k steps corresponding to 30% of the first epoch) start to be adapted in addition to the regression layer parameters. At this moment XLM-R specializes itself at encoding acoustic HuBERT units and loss significantly decreases. We obtain very good correlations on the test set (see table \fTable 1: Correlations between text-ChrF and speech-ChrF (on synthetic speech, test set) for different experimental setups. Input Train Data Encoder Epochs Metric \u03c1 (Pearson) \u03c1 (Spearman) Text (topline) 14.7k utt. XLM-R 5 learnt chrF 0.902 0.922 Speech (baseline) None Hubert-50 naive chrF 0.431 0.386 Speech dev (14.7k) Hubert-50 +XLM-R 5 learnt chrF 0.542 0.480 Speech dev (14.7k) Hubert-200 +XLM-R 5 learnt chrF 0.595 0.567 Speech train (207.4k) Hubert-200 +XLM-R 5 learnt chrF 0.755 0.700 Speech train (207.4k) Hubert-200 +XLM-R 10 learnt chrF 0.779 0.733 Table 2: Correlations between (a) text-BLEU and speech-BLEU (b) text-BLEU and sentence-level ASR-BLEU (natural speech) metric speech-BLEU (ours) ASR-BLEU [1] (whisper tiny 28.8% WER) ASR-BLEU [1] (whisper large 10.1% WER) \u03c1 (Spearman) \u03c1 (Pearson) \u03c1 (Spearman) \u03c1 (Pearson) \u03c1 (Spearman) \u03c1 (Pearson) dev 0.838 0.988 0.531 0.528 0.784 0.822 test 0.881 0.976 0.579 0.593 0.771 0.805 Figure 6: Loss (MSE) during 5 epochs of training speech-BLEU on 990,000 pairs of natural speech utterances. 2): \u03c1(Pearson) = 0.976 and \u03c1(Spearman) = 0.881. This demonstrates that our approach, when learnt on enough pairs of natural speech, can be used to train a similarity metric (such as BLEU) between two audio speech samples. The high correlation coefficients (actually higher than the ones obtained on synthetic speech) can be explained by the fact that our training data is bigger for natural speech (1M pairs) than for synthetic speech (207k pairs) and also by the BLEU distribution of our dataset (figure 5) which is bimodal with large majority of high scores ([0.8,1.0]) making score prediction task probably easier. 5.2.2. Comparison with sentence-level ASR-BLEU We compare our approach with sentence-level ASR-BLEU [1] which consists in automatically transcribing both speech hypothesis and reference and compute sentence-level BLEU between transcripts.5 We used two multilingual ASR [21] models with different performance (whisper-tiny/39M parameters and whisper-large/1550M parameters) to decode signals of dev and test sets (we keep punctuation and case for ASR-BLEU computation). Right part of table 2 shows that sentence-level ASRBLEU is a poor proxy to real text-BLEU even when ASR system is strong (whisper-large obtains 10.1% WER on common voice data according to [21]). When ASR system is weaker (whisper-tiny obtains 28.8% WER on common voice data according to [21]) correlation scores are even worse. This indi5We however apply ASR on both H and R while [1] applies it to H only and use pre-existing text reference for R. cates that in situations where the target language lacks a robust ASR system, relying solely on ASR-BLEU could be misleading. Conversely, our trained speech-BLEU metric exhibits the strongest correlation with the original text-BLEU. 5.2.3. 
Qualitative analysis Figure 5 (right) displays speech-BLEU distribution of our test set which is similar to the original text-BLEU distribution which confirms, for natural speech, results of figure 4 already found for synthetic speech.6 As supplementary multimedia material, we offer audio pairs with their text-BLEU, speech-BLEU, and ASR-BLEU scores for randomly selected utterances from the test set. Through our observations, we found that our speechBLEU metric is capable of predicting low scores for poorly related utterances, while also indicating a score close to 1 for similar utterances spoken by different speakers. 6." + } + ], + "Fethi Bougares": [ + { + "url": "http://arxiv.org/abs/2212.05479v1", + "title": "End-to-End Speech Translation of Arabic to English Broadcast News", + "abstract": "Speech translation (ST) is the task of directly translating acoustic speech\nsignals in a source language into text in a foreign language. ST task has been\naddressed, for a long time, using a pipeline approach with two modules : first\nan Automatic Speech Recognition (ASR) in the source language followed by a\ntext-to-text Machine translation (MT). In the past few years, we have seen a\nparadigm shift towards the end-to-end approaches using sequence-to-sequence\ndeep neural network models. This paper presents our efforts towards the\ndevelopment of the first Broadcast News end-to-end Arabic to English speech\ntranslation system. Starting from independent ASR and MT LDC releases, we were\nable to identify about 92 hours of Arabic audio recordings for which the manual\ntranscription was also translated into English at the segment level. These data\nwas used to train and compare pipeline and end-to-end speech translation\nsystems under multiple scenarios including transfer learning and data\naugmentation techniques.", + "authors": "Fethi Bougares, Salim Jouili", + "published": "2022-12-11", + "updated": "2022-12-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction End-to-end approach to speech translation is gradually replacing the cascaded approach which consists of transcribing the speech inputs with an ASR system, and translating the obtained transcription using a text-to-text MT system. For instance, and for the \ufb01rst time, the winning system in the IWSLT 2020 TED English-to-German speech translation shared task was an end-to-end system (Ansari et al., 2020). Despite this positive result, the end-to-end approach is used on a limited scale due to the lack of labeled data. Indeed, data scarcity is today the major blocker for the widespread adoption of the end-to-end models. Taking this into consideration, recent works have focused on developing speech translation corpora. Joint efforts in this direction have allowed us to collect a signi\ufb01cant quantity and good quality of speech translation corpora. Not surprisingly, speech translation corpus development has started for well-resourced languages including English and some European languages. In (Kocabiyikoglu et al., 2018), the 236 hours English\u2192French ST Augmented LibriSpeech were released. Shortly after, (Di Gangi et al., 2019) released the MUST-C corpus including few hundreds (385 to 504 hours) of parallel ST data of TED talks translations from English to eight European languages. At the same time, the EuroparlST (Iranzo-S\u00e1nchez et al., 2019) was released with translations between 6 European languages, with a total of 30 translation directions. 
While all the previous resource development effort has focused on well-resourced languages, the most recent published corpora CoVoST (Wang et al., 2020a) and CoVoST2 (Wang et al., 2020b). These latter works released a large-scale Multilingual Speech-To-Text Translation Corpus covering translations from 21 languages to English and from English to 15 languages. Although the Arabic-English is one of the language pairs covered by the CoVoST2 corpus, the authors consider it as a low-resource pair. In fact, CoVoST2 corpus contained only 6 hours of prepared speech uniformly splitted between train, dev and test sets. In this paper, we conduct a series of experiments to present the \ufb01rst results of Arabic to English End-to-End Broadcast News Speech translation. This paper is organized as follows: section 2 presents Arabic-to-English speech translation related works. Section 3 gives details about the source of our training data and the method we have used to extract theses data. In section 4, we present our experimental setup as well as the used toolkits to train ou models. Sections 5 and 6 provides details about the pipeline and end-to-end speech translation systems, respectively. Section 7, gives a brief discussion and results analysis. Finally, section 8 concludes the \ufb01ndings of this study and discusses future work. arXiv:2212.05479v1 [cs.CL] 11 Dec 2022 \f2 Related works Arabic-English (AR-EN) is one of the most studied language pair in the context of Speech Recognition and Machine Translation. For instance, this language pair was integrated in several evaluation campaigns and projects including the International Workshop on Spoken Language Translation (IWSLT) and DARPA\u2019s Babylon project. These earlier projects have built on the traditional pipelined architecture integrating speech recognition system in the source language followed by machine translation from the transcript to the target language. In IWSLT, the speech translation task was introduced for the \ufb01rst time in 2006. The IWSLT06 (Paul, 2006) translation campaign was carried using either the manual or the automatic transcription of speech input in the travel domain. This translation task was renewed for several years using always the pipeline approach. Pipeline architecture was also used by BBN in the context Babylon project (Stallard et al., 2011). They developed the TransTalk system including a pipeline of ASR and MT systems in both directions (AR\u2194EN). More recently, but still with the same approach, QCRI presented their live Arabicto-English speech translation system in (Dalvi et al., 2017). The system is a pipeline of a Kaldibased Speech Recognition followed by a Phrasebased/Neural MT system. Recently, there has been a shift to the most recent end-to-end approach without the intermediate step of transcribing the source language. Indeed, IWSLT 2018 was the \ufb01rst time where organizers drooped the ASR task and participants needed to develop an end-to-end speech translation systems. End-to-end speech translation has shown its effectiveness for multiple languages and in multiple scenarios. It becomes now a wellestablished task in IWSLT evaluation campaign were multiple shared taks are proposed to assess Spoken Language Translation (SLT) systems for many language pair in several settings. 
Despite the great interest being shown to the end-to-end approach for speech translation task, we were able to identify only one recent work by (Wang et al., 2020b) including Arabic-English language pair limited to a corpus of 6 hours. We are also aware of the IWSLT 2022 Dialectal Speech Translation1 task which, unlike this work, focuses on Tunisianto-English speech translation (Zanon Boito et al., 1https://iwslt.org/2022/dialect 2022; Yan et al., 2022; Yang et al., 2022). 3 Training Data Whatever the chosen architecture for Speech translation system (pipeline or end-to-end), it requires a large amount of manually annotated training data that might be hard to obtain for multiple language pairs. For the Arabic-English language pair, a large amount of training data for ASR and MT was manually annotated in the framework of the DARPA\u2019s Global Autonomous Language Exploitation (GALE) project (Cohen, 2007). This huge amount of work was done for the purpose of making foreign languages speech and text accessible to English-only speakers through the development of automatic speech recognition and machine translation systems. In this respect, Arabic broadcast news and conversation speech were collected from multiple sources, then annotated under the supervision of Linguistic Data Consortium (LDC). Audio corpora and their transcripts are separately released in three phases : GALE Phase 2, 3 and 4. In addition to the speech audio corpora and transcripts released to train Arabic ASR systems, LDC also made available multiple Arabic to English parallel corpora.The latter are intended to be used for training and evaluating Arabic to English MT systems. They have been developed by manually translating from a number of different sources including Arabic news-wire, discussion groups and broadcast news and conversation. Upon closer inspection of these parallel corpora, we have found that part of the broadcast news and conversation parallel corpora were built by translating the manual transcripts released for the ASR task. Following the discovery we dug deep in the GALE speech recognition and machine translation LDC related releases and, as illustrated in Figure 1, we parsed all GALE speech recognition and machine translation corpora in order to extract a 3-way parallel corpus consisting of Arabic audio along with their Arabic transcriptions and English translation. As shown in Figure 1, for each transcribed audio \ufb01le part of the GALE corpus, we extract only segments for which we were able to \ufb01nd an exact match between the manual transcription, from ASR training data, and the source side of parallel corpora. Table 1 shows \fthe amount of speech audio for which we were able to \ufb01nd the corresponding translation in the LDC MT related releases. We report, for each phase: 2, 3 and 4, the original size of the speech corpus in hours and the extracted subset for which the English translation had been found. GALE Phase Hours #Segments Phase 2 436h 11 min 190.510 Phase 2 ST. 59h 12 min 24.519 Phase 3 419h 03 min 195.143 Phase 3 ST. 28h 50 min 13.261 Phase 4 96h 18 min 54.787 Phase 4 ST. 4.0h 08 min 1.855 Phase 2/3/4 951h 32 min 440.440 Extracted ST. 92h 10 min 39.635 Table 1: Statistics of the original GALE Arabic to English Speech Transcription corpus and the extracted subsets for which translations are available. All the extracted segments were afterwards aligned using timestamps information from ASR transcript and translation from MT target side. 
As table 1 shows, an overall Arabic to English speech translation corpus of around 92 hours was extracted. This corpus was then cleaned out by removing all the back-channel and incomplete speech segments. The \ufb01nal corpus is then splitted into training, development and test sets. Development and test contain segments from randomly selected broadcast audio programs. Their size is respectively 1188 and 987 segments. Development set contain broadcast News and Conversation recordings from Abu Dhabi TV, Al Alam News Channel, based in Iran and Al Arabiya. Test set is made up of broadcast News and Conversation recordings from Abu Dhabi TV, Aljazeera, Al Arabiya and Syria TV. The remaining material was used as training data for ASR, MT and ST systems. Train dev. test Hours 83h54 3h05 2h38 Sentences 32.099 1188 987 #AR words 606.465 22.537 18.598 #EN words 945.801 35.180 27.880 Table 2: Statistics and splits of the extracted Arabic to English Speech Translation corpus extracted from LDC ASR and MT independent releases. Table 2 gives a detailed statistics of the extracted Arabic to English Speech Translation corpus, including speech duration as well as token counts for both transcripts and translations. 4 Experimental Setup All our experiments are built using open source toolkits with the following settings: ASR models were built using the End-to-End Speech Processing Toolkit ESPnet (Watanabe et al., 2018b). We trained an attention-based encoder-decoder architecture with an encoder of 4 VGGBLSTM layers including 1024 cells in each layer. The second and third bottom BLSTM layers of the encoder reduced the utterance length by a factor of two. We used a decoder of one 1024-dimensional BLSTM layer. For both ASR and ST speech utterance, we extracted 40 Mel\ufb01lterbank energy features with a step size of 10ms and a window size of 25ms. The extracted features, we applied mean and variance normalization. MT models were built using the FAIRSEQ package (Ott et al., 2019a). We trained end-to-end word and bpe-based translation systems using the \"lstm_luong_wmt_en_de\" model template. This template is a standard LSTM Encoder-Decoder architecture composed of 4 stacked BLSTM layers, each with 1000 cells, and 1000-dimensional embeddings (Luong et al., 2015). Translation tasks (AST and MT) evaluation was carried out using case-sensitive BLEU metric (Papineni et al., 2002). Scores are calculated using one human reference with Moses\u2019mteval-v14.pl script 2 applied to de-tokenized and punctuated translation output. As for ASR, systems were evaluated using Word Error Rate (WER). 5 Pipeline Speech Translation In this section, we evaluate the pipeline approach for speech translation in two different scenarios, plausible for many language pairs, depending on the amount and the type of training data used for the development of the Speech Translation task. 1. Constrained Scenario : Under this scenario we have access to a 3-way limited training data. This data includes speech audio \ufb01les in source language their transcriptions in the source language and translations to the target language. 2https://github.com/moses-smt/mosesdecoder/ blob/master/scripts/generic \fFigure 1: Extraction Arabic to English speech translation corpus from LDC ASR and MT independent releases. 2. Unconstrained Scenario : In addition to resources from the constrained scenario, we have access to a large ASR and MT-speci\ufb01c resources. 
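Before going through the results, the cascade itself can be summarised in a few lines. Here `asr_model.transcribe` and `mt_model.translate` are hypothetical wrappers around the ESPnet and fairseq models described above (their real APIs are more involved), and the paper actually scores with Moses' mteval-v14.pl; sacrebleu is shown only as a convenient equivalent for the BLEU computation.

```python
import sacrebleu

def cascade_translate(asr_model, mt_model, audio_paths):
    transcripts = [asr_model.transcribe(path) for path in audio_paths]  # Arabic hypotheses
    return [mt_model.translate(text) for text in transcripts]           # English hypotheses

def evaluate_pipeline(asr_model, mt_model, audio_paths, references):
    hypotheses = cascade_translate(asr_model, mt_model, audio_paths)
    return sacrebleu.corpus_bleu(hypotheses, [references]).score        # corpus-level BLEU
```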
As to the \ufb01rst scenario of the pipeline approach we only used the 3-way parallel data reported in table 2. In this instance, an end-to-end ASR module was trained using ESPnet (Watanabe et al., 2018a) toolkit on the speech audio \ufb01les from Table 2 and their corresponding transcripts. In the Unconstrained Scenario, however, ASR module was trained using the totality of the GALE Phase 2, 3 and 4 ASR data reported in Table 1. ASR System dev test ASR_Const 20.90 21.90 ASR_UnConst 13.10 14.60 Table 3: ASR WER (in %) on the dev and test sets. Table 3 presents the results of ASR system under both constrained and unconstrained scenarios. As shown in the Table 3, using a training set of around 84h of manually transcribed broadcast news and conversation, we obtained a WER of 20.90% and 21.90% on dev and test sets, respectively. Not surprisingly, WER has been signi\ufb01cantly improved with the use of the complete GALE training data3 (row ASR_UnConst) to achieve 13.10% and 3We have taken particular care to remove dev and test data before using GALE corpora to train the ASR system. 14.60% on dev and test sets, respectively. As previously stated, within the pipeline ST framework, the output of the ASR module is automatically translated to the target language using the MT module. The MT module is also an end-to-end system trained using Fairseq toolkit (Ott et al., 2019b) under both constrained (MT_Const) and unconstrained (MT_UnCons) scenarios. Table 4 reports the BLEU scores of the translation output by varying ASR module condition while \ufb01xing MT module constrained to speech translation data composed of the transcripts along with their corresponding English translation from Table 2. Pipeline ST System dev test MT_Const_ASR_Const 19.03 15.96 MT_Const_ASR_UnConst 20.69 16.58 MT_Const_ref_Transc 22.31 18.30 Table 4: Case-sensitive tokenized and single-reference BLEU scores (in %) of the pipeline speech translation system with the constrained MT module. The \ufb01rst row in table 4 (MT_Const_ASR_Const) gives the BLEU score when the MT constrain module translates the output of a constrained ASR system (row ASR_Const from Table 3). In this case, a BLEU score of 19.03% and 15.96% is respectively achieved on dev and test sets. The second row in the same table (MT_Const_ASR_UnConst) shows the BLEU \fscore when the ASR module is under the unconstrained condition, i.e. output from the system ASR_UnConst in Table 3 are used as input to the MT system. As expected, when it comes to translating a higher transcription quality, the translation quality is better and the BLEU score is increased by 1.66 and 0.62 BLEU points on dev and test sets, respectively. The last row of table 4 (MT_Const_ref_Transc) simulates the situation where we have access to a perfect transcripts in the source language. In this case, translation quality is further improved reaching 22.31 BLEU points on dev set and 18.30 points on test set. In a similar vein, table 5 presents results in settings where MT module is no longer constrained to speech translation data. Indeed, additional Arabic to English Bilingual text from GALE LDC releases are used to train the unconstrained MT module 4. This unconstrained MT module, was used to run several experiments using various input conditions similar to what we did within the constrained condition. The results of these experiments are presented in Table 5. 
The \ufb01rst row (MT_UnConst_ASR_Const) sets out the BLEU score when the unconstrained MT module translates the output of the constrained ASR (\ufb01rst row in table 3). Compared to using the constrained MT system, a considerable improvement of 12.84 (from 19.03 to 31.87) and 8.26 (from 15.96 to 24.22) BLEU points is achieved on dev and test sets, respectively. As we have seen above, translation quality is further improved when the input to the translation module is of a higher quality generated by the unconstrained ASR system (row MT_UnConst_ASR_UnConst). This allows to reach a dev and test BLEU scores of 36.48 and 25.80 respectively. As expected, the BLEU score is even better when it comes to translate the reference transcription (MT_UnConst_ref_Transc) as shown in the last row of Table 5. In the latter case, we achieved a dev set BLEU score of 39.51 and a test set BLEU score of 30.60. 6 End-to-End Speech Translation In this section, we present and evaluate the end-to-end approach for Arabic to English speech 4Unconstrained MT system was trained using all GALE Arabic-English Parallel Text from 2007 to 2016. Pipeline ST System dev test MT_UnConst_ASR_Const 31.87 24.22 MT_UnConst_ASR_UnConst 36.48 27.51 MT_UnConst_ref_Transc 39.51 30.60 Table 5: Case-sensitive tokenized and single-reference BLEU scores (in %) of the pipeline speech translation system with Unconstrained MT module. translation task. The End-to-End system is built using the ESPnet toolkit (Watanabe et al., 2018b). We used an attention-based encoder-decoder architecture. The encoder has two VGG-like CNN blocks followed by \ufb01ve stacked 1024-dimensional BLSTM layers. The decoder is composed of two 1024-dimensional LSTM layers. Each VGG block contains two 2D-convolution layers followed by a 2D-maxpooling layer whose aim is to reduce both time and frequency dimension of the input speech features by a factor of 2. All our experiments are conducted using characters as target tokens. Table 6 shows the performance of the end-to-end ST model with different training con\ufb01gurations. End2End ST system dev test Baseline (1) 2.58 2.23 (1) + Enc. init 12.44 9.57 (1) + Unsup ph234 23.23 18.97 (1) + Enc. Init + Unsup ph234 24.95 19.09 Table 6: Case-sensitive tokenized and single-reference BLEU score (in %) of the End-to-end AR\u2192EN Speech Translation system with Encoder initialization and data augmentation The \ufb01rst row from Table 6 shows the baseline results obtained when the end-to-end model is trained under the constrained scenario, that is when the training data is restricted to the 83h54 minutes from table 2. We can clearly see that the end-to-end model is not strong enough to compete with the cascaded model trained using the same amount of data. Indeed, the BLEU score of the end-to-end system on the dev set is 2.58, compared to the 19.03 points of the pipeline model. The same goes for test set where end-to-end system BLEU score is 2.23 compared to 15.96 which is obtained with cascade translation approach. From this initial baseline and with the aim of improving the end-to-end system translation \fquality, we employed the well established transfer learning technique (Bansal et al., 2018) commonly referred as encoder pre-training. Indeed, using the ASR encoder of the Unconstrained ASR system (row ASR_UnConst in 3) to initialize the parameters of the ST encoder greatly improves the performance of end-to-end ST networks. The results of the encoder pre-training are shown in the second row ( (1) + Enc. init) of table 6. 
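For illustration, encoder pre-training of this kind amounts to copying the trained ASR encoder weights into the ST encoder before ST training starts. The checkpoint path and the "encoder." key prefix are assumptions about how the checkpoint happens to be organised, not the actual ESPnet layout.

```python
import torch

def init_st_encoder_from_asr(st_model, asr_checkpoint_path):
    """Transfer-learning sketch: initialise the ST encoder from a trained ASR checkpoint."""
    asr_state = torch.load(asr_checkpoint_path, map_location="cpu")
    encoder_state = {k[len("encoder."):]: v
                     for k, v in asr_state.items() if k.startswith("encoder.")}
    # strict=False tolerates parameters that do not line up exactly between the two encoders
    missing, unexpected = st_model.encoder.load_state_dict(encoder_state, strict=False)
    return missing, unexpected
```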
As a result, we observed a strong effect reflected in a substantial improvement of the BLEU score: +9.86 and +7.34 BLEU points on the dev and test sets, respectively. Like transfer learning via encoder pre-training, data augmentation has proven to enhance end-to-end speech translation quality. It is carried out using synthetic data generated by automatically translating the transcripts of an ASR corpus in the source language. Here, we used the unconstrained NMT system (MT_UnConst) of Table 5 to translate the Arabic GALE transcripts listed in Table 1. Incomplete and back-channel speech segments were filtered out from the generated translations. In total, we were able to create a synthetic corpus of 795 hours of Arabic-to-English speech translation data, detailed in Table 7. Hours #Sent. #AR #EN Gale Synth. 795 314,167 6.1M 9.1M Table 7: Statistics of the synthetic Ar-En ST corpus. These synthetic data are then used as additional training data for the end-to-end ST system. The results of this data augmentation experiment are shown in Table 6 (row (1) + Unsup ph234). As the results show, synthetic training data boosts the end-to-end ST system to BLEU scores of 23.23 and 18.97 points on the dev and test sets, respectively. Both encoder pre-training and data augmentation are thus shown to improve the ST baseline significantly. We also experimented with using both methods at the same time. The last row of the same table presents the results of the end-to-end speech translation system trained with data augmentation using the synthetic data from Table 7 and with encoder parameters initialized from the ASR_UnConst system presented in Table 3. By applying these two methods together, we reached BLEU scores of 24.95 and 19.09 points on the dev and test sets, respectively. These end-to-end speech translation results are to be compared to the pipeline results shown in row MT_UnConst_ASR_UnConst of Table 5. 7 Discussion and analysis Despite the improvements brought by transfer learning and data augmentation techniques, the best results are still obtained with the cascade architecture. We believe that this performance gap can be partly explained by the fact that the end-to-end system was trained on only a small amount (~84 hours) of real speech translation data. Based on the results of previous work (Liu et al., 2019), end-to-end ST models are known to be an effective means of circumventing the error-propagation problem faced by the conventional pipeline system. Indeed, every component involved in the traditional pipeline approach produces errors, which are propagated through the cascade and lead to compounding follow-up errors. In order to assess the ability of our end-to-end system to overcome this error-propagation pattern, we selected translation examples where the pipeline system fails due to this problem and checked the translation output of the end-to-end system. The example in Table 8 shows a translation error caused by the propagation of transcription errors that occurred at the end of the segment (text in bold in the ASR output row). The end-to-end system, however, relies on the source speech signal and translates the same part of the input correctly. In addition to this error-propagation issue, we have found that the end-to-end system is sometimes penalized even though its translation is correct.
Table 9 presents an example where both systems output a correct translation but the BLEU score is higher for the pipeline system. The same can happen with the pipeline system as well, but we believe that end-to-end systems are more affected, as the translation references were obtained by translating a textual input, not the speech audio in the source language. This trend must be probed further in order to quantify its impact on end-to-end ST system performance. We leave such investigations as future work." + } + ], + "Maxat Tezekbayev": [ + { + "url": "http://arxiv.org/abs/2111.06832v3", + "title": "Speeding Up Entmax", + "abstract": "Softmax is the de facto standard in modern neural networks for language\nprocessing when it comes to normalizing logits. However, by producing a dense\nprobability distribution each token in the vocabulary has a nonzero chance of\nbeing selected at each generation step, leading to a variety of reported\nproblems in text generation. $\\alpha$-entmax of Peters et al. (2019,\narXiv:1905.05702) solves this problem, but is considerably slower than softmax.\n In this paper, we propose an alternative to $\\alpha$-entmax, which keeps its\nvirtuous characteristics, but is as fast as optimized softmax and achieves on\npar or better performance in machine translation task.", + "authors": "Maxat Tezekbayev, Vassilina Nikoulina, Matthias Gall\u00e9, Zhenisbek Assylbekov", + "published": "2021-11-12", + "updated": "2022-05-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Sparseness of vector representations is a desirable trait in neural network models for natural language processing (NLP): words (subwords) are discrete objects by their nature, and, accordingly, are encoded by one-hot embeddings at the input and output of neural networks. However, to predict a categorical response in neural models, softmax is most often used, which produces a dense probability distribution, i.e. every category (word/subword) receives a non-zero probability. Recent studies suggest that it is this output density that poses problems when the trained NLP model is used for inference. For example, in the case of text generation, unconstrained sampling from a trained language model results in poor quality of the resulting text (Holtzman et al., 2020). In neural machine translation (NMT), exact decoding from a trained model often results in empty text (Stahlberg and Byrne, 2019).1 To get around these problems, constrained decoding techniques have been proposed, most of which artificially impose sparsity on softmax prediction. For example, Fan
A problem with these transformations however is that they are signi\ufb01cantly slower than softmax when the number of categories (vocabulary size) is tens of thousands, as in the case of text generation. This is because \u03b1-entmax transformation\u2014in its original formulation\u2014requires sorting over the logits.2 In this work, we ask the question: is it possible to obtain a sparse output like that of \u03b1-entmax, but without its degradation in computational speed? Our answer is af\ufb01rmative\u2014we propose a sparse output transformation that \u2022 is on par or superior to softmax and \u03b1-entmax in the NMT tasks, \u2022 works as fast as softmax during training and at inference, \u2022 gives the same training dynamics as \u03b1-entmax (in training steps). The most surprising thing is that such a transformation is simply a shifted ReLU raised to power 1 \u03b1\u22121, which we call \u03b1-ReLU. 2We also compare against an approximate version which only performs sorting on the highest values of the logits. arXiv:2111.06832v3 [cs.CL] 19 May 2022 \fThe rest of the paper is organised as follows. In Sect. 2 we motivate the choice of \u03b1-ReLU as the output transformation, and also select an appropriate loss function. In Sect. 3 we experimentally con\ufb01rm our claims about performance and output speed of \u03b1-ReLU in the NMT task. Sect. 4 is devoted to a comparative analysis of \u03b1-ReLU and \u03b1-entmax in terms of sparsity, ability to solve the empty translation problem, and training dynamics. 2 \u03b1-ReLU at Output Our departure point is the \u03b1-entmax transformation of Peters et al. (2019) which can be de\ufb01ned for z \u2208Rd as \u03b1-entmaxi(z) = [(\u03b1 \u22121)zi \u2212\u03c4(z)] 1 \u03b1\u22121 + , (1) where [x]+ := max{x, 0}, and \u03c4 : Rd \u2192R is the (unique) function that satis\ufb01es P j[(\u03b1\u22121)zj \u2212 \u03c4(z)] 1 \u03b1\u22121 + = 1 for any z. It is this threshold \u03c4 that makes the computation of \u03b1-entmax slow, because one needs to sort the components of z to \ufb01nd \u03c4 (Peters et al., 2019, Alg. 2). As we can see, the threshold \u03c4 is only needed to ensure that \u03b1-entmax(z) is a probability distribution. We loosen this constraint, and only require non-negative weights, which is suf\ufb01cient for most uses. Consider then a transformation \u03b1-ReLUi(z) := [(\u03b1 \u22121)zi \u2212\u03c4] 1 \u03b1\u22121 + , (2) where \u03c4 is a constant that does not depend on z. In order to force \u03b1-ReLU(z)\u2014applied to the logits z\u2014to converge to the one-hot vector ey of the gold label y we need to adjust the corresponding loss. This can easily be done by feeding the logits z and the output \u03b1-ReLU(z) into the following loss, which we call \u03b1-ReLU loss. \u2113(z, y) = (\u03b1-ReLU(z) \u2212ey)\u22a4\u0010 z \u2212 \u03c4 \u03b1\u221211 \u0011 + H\u03b1[\u03b1-ReLU(z)], (3) where H\u03b1[p] := 1 \u03b1(\u03b1\u22121) \u0010 1 \u2212P j p\u03b1 j \u0011 , \u03b1 \u0338= 1, is the Tsallis \u03b1-entropy (Tsallis, 1988), and 1 := (1, . . . , 1) \u2208Rd is a vector of ones. The rationale for coupling \u03b1-ReLU with the loss (3) is the following Lemma 1. For any \u03c4 \u2208R, the gradient of the \u03b1-ReLU loss (3) is given by \u2207z\u2113(z, y) = \u03b1-ReLU(z) \u2212ey. Proof. The proof is in Appendix B.1. By Lemma 1, gradient-based minimization of \u2113 indeed forces \u03b1-ReLU(z) \u2192ey. 
Notice that this is similar to what happens when the softmax normalization is coupled with the cross-entropy loss or when \u03b1-entmax is coupled with the entmax loss. In both cases differentiating the loss with respect to logits gives p\u2212ey, where p is either softmax(z) or \u03b1-entmax(z) (Martins and Astudillo, 2016; Peters et al., 2019). Remark. Recall that \u03b1-entmax is a generalization of sparsemax. For example, 2-entmax is essentially sparsemax, and for \u03b1 \u2208(1, 2) we get a smoothed version of sparsemax. Similarly, \u03b1ReLU is a kind of generalization of ReLU. So, the standard ReLU is 2-ReLU (with \u03c4 = 0), and for \u03b1 \u2208(1, 2) we get a smoothed ReLU (see Fig. 1). Figure 1: The graph of \u03b1-ReLU(x) for several \u03b1 \u2208 (1, 2], with \u03c4 = 0. 2-ReLU is a standard ReLU(x) := [x]+. 3 Experiments In theory, nothing prevents \u03b1-ReLU from learning what \u03b1-entmax is learning. However, in practice we can have a different picture, because training is conditioned by many factors\u2014the size of the dataset, the architecture of the neural network, the optimization algorithm, etc. In this section, we compare \u03b1-ReLU empirically with \u03b1-entmax (as well as with sparsemax and softmax), assuming all other factors are \ufb01xed. The goal of these experiments is to evaluate the consequences of using \u03b1-ReLU as drop-in replacement for \u03b1-entmax. We test \u03b1-ReLU at output in a neural machine translation task (Sutskever et al., 2014), which is essentially a conditional text generation task. Compared to open-ended text generation, there is a \fOutput Transform Loss IWSLT De\u2192En WMT En\u2192De WMT En\u2192Ru softmax cross-entropy 35.3 28.7 22.4 sparsemax sparsemax loss 35.5 26.6 19.6 1.5-entmax 1.5-entmax loss 36.6 28.6 23.9 1.5-entmax (k = 100) 1.5-entmax loss 36.7 28.4 23.7 1.5-ReLU 1.5-ReLU loss 37.3 28.6 24.6 # Trainable parameters 47M 75M 75M Table 1: NMT results: comparison of softmax, sparsemax, 1.5-Entmax and the proposed 1.5-ReLU as the output transformations in the Transformer NMT model. Reported is detokenized test BLEU. clearer metric of the quality of the generated text\u2014 the BLEU score (Papineni et al., 2002). As in open-ended text generation, at each prediction step, the NMT system needs to make a choice from all words (subwords) of the vocabulary, the size of which can reach several tens of thousands. Therefore, the sparsity of the output distribution becomes critical in such setups, since it can explicitly prevent the occurrence of most of the words that are inappropriate in the context. 3.1 Setup Data. We conduct experiments on three datasets of varied sizes: \u2022 IWSLT\u201914 De\u2192En (Cettolo et al.), 172K training examples, \u2022 WMT\u201914 En\u2192De (Bojar et al., 2014), 4.5M training examples, \u2022 WMT\u201913 En\u2192Ru (Bojar et al., 2013), 1.3M tranining examples.3 We preprocess all datasets using the byte pair encoding algorithm (Sennrich et al., 2016) with 10K merge operations on IWSLT, 40K merge operations on WMT En\u2192De, and 60K merge operations on WMT En\u2192Ru. We report detokenized casesensitive BLEU with SacreBLEU (Post, 2018).4 Hyperparameters \u03b1 and \u03c4. In all experiments we set \u03b1 = 1.5, because this value was recommended by Peters et al. (2019); Peters and Martins (2021) as the middle ground between \u03b1 = 1 (softmax) and \u03b1 = 2 (sparsemax). 
The value for \u03c4 is chosen as follows: we run the \ufb01rst batch through a non-trained neural network, 3We did not use the Yandex 1M Parallel Corpus because of its license restrictions. 4BLEU+case.mixed+lang.ende+numrefs.1+smooth.exp+tok.13a+version.1.5.1 which has 1.5-entmax at the output, in the forward direction and determine the average \u03c4 value across the batch. This value is then used to train the 1.5ReLU network. Our preliminary experiments have shown that 1.5-ReLU convergence is sensitive to the \u03c4 value, and that having output close to the probability distribution early in the learning phase works well with the rest of hyperparameters which are set to their default values. Training. We trained the Transformer Base (Vaswani et al., 2017) using the OpenNMT-py 2.0 toolkit (Klein et al., 2017). Optimization details are in Appendix A. 3.2 Results The results are given in Table 1. Reported are test BLEU scores for best checkpoints which are selected based on validation BLEU. We observe that the 1.5-ReLU performs on par with 1.5-entmax or better, while sparsemax is inferior to all others. Training Time. Fig. 2&3 show the training dynamics in training steps and in wall time on WMT\u201914 En\u2192De. Despite the closeness of performance in intermediate steps and at the end of training, we see that on the larger datasets 1.5-entmax is slower in wall time than softmax and 1.5-ReLU. To speed up the learning process, Peters et al. (2019) recommended limiting the number of sorted logits in the \u03b1-entmax to the k largest logits. We tried this using k = 100, which is the default value in the author\u2019s implementation of \u03b1-entmax.5 The resulting training dynamics are shown as dashed curves in Fig. 2&3. As we can see, partial sorting indeed speeds up the learning process, and at the same time does not harm the quality of the translation compared to \u03b1-entmax with full sorting. But in the end, learning is still slower than in the case 5https://github.com/deep-spin/entmax \fFigure 2: Training dynamics in training steps. Figure 3: Training dynamics in absolute time. 1.5-entmax (k=100) is a variant of 1.5-entmax in which sorting is performed only for the largest k = 100 logits. Figure 4: Normalized inference for WMT En\u2192Ru with different beam sizes. of 1.5-ReLU. Of course, one can try to select such k that the speed of calculating the 1.5-entmax will be as close as possible to the speed of 1.5-ReLU without losing quality, but this requires additional efforts on the part of the user, and this must be done for each case separately. Also note that both 1.5-entmaxes (with full and partial sorting) cannot learn the English-Russian data set as well as 1.5-ReLU. In this regard, 1.5-ReLU does not require additional \ufb01ne-tuning, converges as fast as softmax in absolute time and performs on par or better. Thus 1.5-ReLU combines all three desired properties: computation speed, task performance, and sparsity of output. Inference Time. We measured inference time of translating the WMT En\u2192Ru test data with the different strategies and with different beam sizes. The results\u2014normalized by the smallest value\u2014 are shown in Fig. 4. As can be seen the relative difference seems independent of the beam size: softmax is almost twice faster than 1.5-entmax (with full sorting over the logits). Even though the softmax version is optimized through the softmax CUDA kernel, it performs equivalent to the 1.5-ReLU model in terms of computation speed. 
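Returning to the τ-selection heuristic of Section 3.1, a minimal sketch of it is given below: the threshold τ(z) of 1.5-entmax is recovered for each position of the first batch by bisection on its defining equation Σ_j [(α−1)z_j − τ]_+^{1/(α−1)} = 1 and then averaged. The bisection is an illustrative reconstruction (it avoids depending on any particular entmax implementation), and the `untrained_model(...)` call at the end is a hypothetical stand-in for running the first batch through the untrained Transformer.

```python
# Sketch: estimate the constant tau for 1.5-ReLU from one batch of logits
# produced by an *untrained* model with 1.5-entmax at the output.
import torch

def entmax_threshold(z, alpha=1.5, iters=60):
    # z: (vocab,) logits for a single prediction step.
    # Solve sum_j [(alpha-1) z_j - tau]_+ ** (1/(alpha-1)) = 1 by bisection.
    lo = (alpha - 1.0) * z.max() - 1.0   # at lo the max coordinate alone already contributes 1
    hi = (alpha - 1.0) * z.max()         # at hi the sum is 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        p = torch.clamp((alpha - 1.0) * z - mid, min=0.0) ** (1.0 / (alpha - 1.0))
        if p.sum() > 1.0:                # sum too large -> tau must increase
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def estimate_tau(first_batch_logits, alpha=1.5):
    # first_batch_logits: (num_positions, vocab) pre-output scores from the untrained model;
    # tau is averaged over all positions in the batch.
    taus = [entmax_threshold(z, alpha) for z in first_batch_logits]
    return torch.stack(taus).mean()

# e.g. tau = estimate_tau(untrained_model(batch_src, batch_tgt))  # hypothetical call
```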
4 Analysis 4.1 Empty Translations We remind the reader that the cat got your tongue problem (Stahlberg and Byrne, 2019) is one of the main motivations for using sparse transformations when generating text. As Peters and Martins (2021) have shown, 1.5-entmax successfully tackles this problem by signi\ufb01cantly lowering the proportion of cases where an empty string is more likely than the beam search hypothesis. For 1.5-ReLU, we also calculated this proportion, and compared it with the proportions for softmax and sparsemax (Table 2). As we see, 1.5-ReLU also successfully tackles the cat got your tongue problem. \fFigure 5: Sparsity as proportion of zero components after applying 1.5-ReLU and 1.5-entmax, test sets. Figure 6: Sparsity on training set. Output IWSLT WMT WMT Transform De\u2192En En\u2192De En\u2192Ru softmax 7.5% 29.8% 31.7% sparsemax 0% 0.03% 0% 1.5-entmax 0% 0.2% 0% 1.5-ReLU 0% 0.3% 0.1% Table 2: Percentage of development set examples for which the model assigns higher probability to the empty string than to the beam-decoded hypothesis. 4.2 Sparsity To compare the sparsity of 1.5-ReLU and 1.5entmax we depict in Fig. 5 the distributions of the number of zero components after applying these transformations (recall that for softmax all components are always nonzero). Since we constructed the \u03b1-ReLU in such way that it mimics the \u03b1entmax (at least in the early stages of training), we expected that these two transformations would have similar properties, including sparsity. However, this is not the case: as we can see, the 1.5-ReLU is signi\ufb01cantly less sparse than the 1.5-entmax. It is noteworthy that lower sparsity in this case correlates with a better performance in the translation task (see Table 1). A possible explanation for the difference in sparsity levels could be that \u03b1-ReLU, in contrast to \u03b1-entmax, behaves signi\ufb01cantly differently on the test set than on the training set. However, this is not the case: for example, comparing the sparsity on the IWSLT training set (Fig. 6), we see that the distributions of non-zero components are almost the same as on the test set for 1.5-ReLU and 1.5-entmax. Note that the sparsity of \u03b1-ReLU and \u03b1-entmax is approximately the same at the beginning of training due to how we initialize \u03c4 in 1.5-ReLU (making it as close as possible to 1.5-entmax\u2019s \u03c4 in the untrained model, Sec. 3.1). However, during training, \u03b1-ReLU\u2019s \u03c4 remains \ufb01xed, and the model can only adapt the logits themselves so that \u03b1-ReLU(z) converges to the corresponding one-hot vector. At the same time, in \u03b1-entmax, \u03c4(z) adapts together with logits z. We hypothesize that during training, the entmax\u2019s \u03c4(z) gradually increases which entails greater sparsity by the end of the training. However, the logits themselves also change during training, so the increase in \u03c4 may not be the cause of greater sparsity. To \ufb01nd out, we track the dynamics of mean logit norm \u2225z\u2225and mean \u03c4 during training for both 1.5-entmax and 1.5-ReLU (Fig. 7). As we can see, the logit sizes grow in both cases. Figure 7: Evolution of the mean \u03c4(z) and \u2225z\u2225during training for 1.5-entmax and 1.5-ReLU models on IWSLT\u201914 En\u2192De. At the same time, the 1.5-entmax\u2019s \u03c4(z) increases following the logit size, while the 1.5-ReLU\u2019s \u03c4 remains constant. From this we conclude that the sparsity of 1.5-entmax is inevitably less than the sparsity of 1.5-ReLU. 
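A small sketch of how such sparsity figures can be measured (the fraction of exactly-zero components per prediction step) is shown below; it assumes the deep-spin `entmax` package for 1.5-entmax, re-defines α-ReLU locally to stay self-contained, and uses τ = 0.33, the IWSLT value reported later in the paper. Random logits stand in for real decoder outputs.

```python
# Sketch: sparsity as the proportion of zero components, as in Figs. 5-6; illustrative only.
import torch
from entmax import entmax15  # assumes the deep-spin `entmax` package is installed

def alpha_relu(z, alpha=1.5, tau=0.33):   # tau = 0.33 as reported for IWSLT
    return torch.clamp((alpha - 1.0) * z - tau, min=0.0) ** (1.0 / (alpha - 1.0))

def zero_fraction(p):
    # p: (positions, vocab) outputs; mean fraction of exactly-zero entries per position
    return (p == 0).float().mean(dim=-1)

logits = torch.randn(8, 10000)            # stand-in for decoder logits over the vocabulary
print(zero_fraction(entmax15(logits, dim=-1)).mean())
print(zero_fraction(alpha_relu(logits)).mean())
```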
\f4.3 Impact of \u03c4 The selection of \u03c4 was described in Section 3.1. However, the question arises: does the described approach lead to the choice of the optimal \u03c4? To \ufb01nd out, we trained the \u03b1-ReLU models for \u03c4 \u2208 {0, 0.1, 0.2, ..., 0.9, 1, 2, 5, 10} on the IWSLT data. Note that all of these \u03c4\u2019s have led to almost the same result at the end of the training (as predicted by Lemma 1). In Fig. 8, we present the dynamics of early training only for \u03c4 \u2208{0, 0.1, 0.2, 0.3, 5, 10}, since the curves for \u03c4 \u2208{0.4, ..., 0.9, 1, 2} practically coincided with the optimal curve corresponding to \u03c4 = 0.3. Note that our \u03c4 selection method Figure 8: Impact of \u03c4 on training dynamics, IWSLT\u201914 En\u2192De. gave a value of 0.33, thus we have no evidence against the adequacy of our method. 4.4 Estimation of \u03c4 without data On closer inspection, we noticed that the preentmax logits in the untrained Transformer model are distributed according to the normal law, regardless of what data is supplied to the input, ShapiroWilk test, p-value > 0.15. This allows us, using asymptotic theory, to estimate \u03c4 as \u02c6 \u03c4 = s dmodel 2(dmodel + dvocab) \u00b7 \u03a6\u22121(1 \u2212p\u2217), (4) where dmodel is the size of hidden representations, dvocab is the vocabulary size for a target language, \u03a6\u22121(\u00b7) is the probit function and p\u2217is the solution of a non-linear equation that involves functions related to the standard normal distribution (see Appendix B.2 for details). Table 3 compares the \u02dc \u03c4 calculated by running data through an untrained model with the estimate \u02c6 \u03c4 obtained from (4). As we can see, \u02c6 \u03c4 practically coincides with \u02dc \u03c4 with an IWSLT\u201914 WMT\u201914 WMT\u201913 De\u2192En En\u2192De En\u2192Ru dmodel 512 512 512 dvocab 10,000 40,000 60,000 p\u2217 .0184 .0171 .0169 \u02dc \u03c4 .33 .17 .14 \u02c6 \u03c4 .33 .17 .14 Table 3: Estimating threshold of 1.5-entmax: \u02dc \u03c4 is a value obtained by running a data through an untrained model; \u02c6 \u03c4 is an estimate based on asymptotic theory, i.e. without running the data through the model. accuracy of two decimal places. Unfortunately, the formula (4) is not universal: it is only true for the Transformer architecture. 4.5 Self-normalization The attentive reader may have noticed that the output of \u03b1-ReLU is not normalized, i.e. the components of \u03b1-ReLU(z) do not have to sum up to 1. Accordingly, the question arises: how correct is it to compare translation scores at different steps of the beam-search decoding if the conditional probabilities are not normalized? However, the comparison is possible if the \u03b1-ReLU(z) components add up to approximately the same number, i.e. if the model is self-normalizing. To check this, we ran the trained \u03b1-ReLU model on the IWSLT and WMT\u201914 test sets, and looked at the distribution of P i \u03b1-ReLUi(z) at each decoding step. The results are shown in Fig. 9. As we can see, the sum of the Figure 9: Distribution of the sum of \u03b1-ReLU(z) components across the IWSLT\u201914 and WMT\u201914 test sets: \u03b1-ReLU self-normalizes. \u03b1-ReLU(z) components concentrates well around its mean \u22481.24 (IWSLT) and 1.09 (WMT\u201914), which might indicate that the model indeed has a self-normalization property. 4.6 Training Dynamics As we noted in Sect. 
3.2, the training dynamics are similar in all three cases (softmax, 1.5\fentmax, 1.5-ReLU) when time is measured in training steps. Here we attempt to explain this phenomenon through the recently proposed Neural Tangent Kernel (NTK) approach of Jacot et al. (2018). Roughly speaking, the NTK theory suggests that a suf\ufb01ciently wide neural network trains like a kernel regression. We use this theory to show (in Appendix B.3) that in all three cases the logits z(x, t) for a training instance x at a training step t evolve (approximately) according to the same differential equation dz dt = \u2212E(x\u2032,y\u2032)[K\u03c3(x, x\u2032) \u00b7 (\u03c3(z\u2032) \u2212ey\u2032)], (5) where expectation is over training examples (x\u2032, y\u2032), \u03c3(\u00b7) is one of the transformations considered (softmax, \u03b1-entmax, or \u03b1-ReLU), and K\u03c3(x, x\u2032) \u2208Rd\u00d7d is a positive semi-de\ufb01nite matrix that depends on \u03c3. The Equation (5) is a non-linear matrix differential equation which in general cannot be solved analytically. However, it has an equilibrium point z(x, t) such that E(x\u2032,y\u2032)[K\u03c3(x, x\u2032) \u00b7 (\u03c3(z\u2032) \u2212ey\u2032)] = 0, thus its solution converges to this point as t \u2192\u221e. This similarity in the evolution of \u03c3(z) implies the similarity in the evolution of the perfomance metric\u2014such as BLEU\u2014across all three transformations. 4.7 Human Evaluation Although the BLEU metric (Papineni et al., 2002) has stood the test of time, it is still an automated assessment of translation quality. To double-check the reliability of the results from Table 1, we decided to manually evaluate the translations from the WMT\u201913 En\u2192Ru test split. To do this, we followed the human evaluation setup from (Berard et al., 2019). We formed two random samples of 135 instances each and gave them to two annotators. 45 instances were shared across two samples in order to calculate inter-annotator agreement. Each instance consists of an original sentence in English and 4 candidate translations into Russian (reference, softmax, entmax, \u03b1-ReLU). The annotators were to rate each translation on a 4-point scale. For annotation instructions, see Appendix C. The order of candidate translations was shuf\ufb02ed for each instance, so the annotators did not know which sentence is from which model. Nevertheless, the annotator always had a good chance of guessing which translation was the reference one, due to the large difference in quality between human and machine translation. Model Avg. Score Std. Dev. Reference 3.9 0.30 Softmax 3.3 0.75 1.5-entmax 3.2 0.74 1.5-ReLU 3.3 0.74 Table 4: Results of Human Evaluation across 270 random examples (with repetitions) from WMT\u201913 En\u2192Ru test split. Scores are on a 4-point scale. The results of human evaluation are shown in Table 4. Cohen\u2019s \u03ba = 0.56, indicating moderate agreement between annotators. As we can see, all three models give approximately the same translation quality, and all three are signi\ufb01cantly inferior to the reference translation. This is generally consistent with the results of 1.5-ReLU and 1.5-entmax in Table 1, but at the same time casts doubt on the softmax lag behind 1.5-ReLU and 1.5-entmax as the BLEU metric suggests. In Appendix D we give a few examples where 1.5-ReLU translates better than 1.5-entmax and vice versa. 5 Related Work Sparse seq2seq models. 
Our proposed \u03b1-ReLU transformation is based on the \u03b1-entmax transformation of Peters et al. (2019), which in turn is a generalization of the sparsemax transformation (Martins and Astudillo, 2016). In our work, we study sparseness at the output of a neural network. Nevertheless, there are a number of works aimed at sparsi\ufb01cation within a neural network. For example, Malaviya et al. (2018); Peters et al. (2019); Correia et al. (2019) show that sparsemax and \u03b1entmax can replace softmax in the attention mechanism with some success. A recent work of Zhang et al. (2021) attempted to replace softmax with a component-wise ReLU in the attention mechanism. Unfortunately, in its pure form, this replacement leads to the inability of the model to learn at all, since its loss function does not decrease during optimization. The authors solve this problem by adding a normalizing layer on top of the attention layer. These and other works (Zhang et al., 2019) state that sparsity in the weights of attention produces more interpretable patterns. However, Meister et al. (2021) questioned this claim and were unable to \ufb01nd clear evidence to support it. Therefore, in this \fwork, we focused on the application of \u03b1-ReLU to the output of the transformer model, and not to the mechanism of attention, but at the same time we do not deny the possibility of studying the latter. Self-normalization. Self-normalizing training aims to bypass the need of normalization during inference time. This is done by tweaking the learning mechanism so that the sum of all predictions sums (approximately) to a constant value. Theoretical work on why this works is poorly understood (Andreas et al., 2015) but early work in neural machine translation has shown its empirical value. Vaswani et al. (2013) achieves that by using noisecontrastive estimation (the neural model is used to re-rank the output of a hierarchical phrase-based machine translation system). Noise-contrastive estimation is also the standard training mechanism for word2vec (more popular than the alternative hierarchical softmax), which also eschews any expensive normalization. Differently, Devlin et al. (2014) changes the training loss to include a factor that encourages the normalizing factor to be 1. At inference time, this is just assumed and decoding time is reported to achieve a 15x speed-up. 6 Limitations and Risks We believe that the main limitations of our work are as follows: \u2022 \u03b1-ReLU\u2019s output is still not a probability distribution, as required by the classical formulation of a probabilistic classi\ufb01cation model. \u2022 \u03c4 evaluation requires either running the data through an untrained model with \u03b1-entmax at the output, or deriving a formula similar to (4) for each individual architecture. \u2022 Our approach only works for the case when \u03b1ReLU is used at the output of the model, but it is not clear how to use it as an alternative to softmax/\u03b1-entmax in the attention layer. The last mentioned limitation leads to the potential risk of inability to learn if \u03b1-ReLU is misused in the intermediate layers of the neural network such as attention layers. The experiments of Zhang et al. (2021) using vanilla ReLU (2-ReLU with \u03c4 = 0 in our notation) instead of softmax to produce attention weights lead to a divergence of the loss function of the Transformer model. This translates into a waste of energy, especially when training large models on large datasets. 
Therefore, we believe that in the future, a preliminary mathematical analysis and/or experiments with small models on small datasets should be carried out as to why the unnormalized distribution of attention weights leads to the inability of the model to learn. 7" + }, + { + "url": "http://arxiv.org/abs/1912.13413v1", + "title": "Semantics- and Syntax-related Subvectors in the Skip-gram Embeddings", + "abstract": "We show that the skip-gram embedding of any word can be decomposed into two\nsubvectors which roughly correspond to semantic and syntactic roles of the\nword.", + "authors": "Maxat Tezekbayev, Zhenisbek Assylbekov, Rustem Takhanov", + "published": "2019-12-23", + "updated": "2019-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Assuming that words have already been converted into indices, let {1, . . . , n} be a \ufb01nite vocabulary of words. Following the setups of the widely used WORD2VEC (Mikolov et al. 2013) model, we consider two vectors per each word i: \u2022 wi is an embedding of the word i when i is a center word, \u2022 ci is an embedding of the word i when i is a context word. We follow the assumptions of Assylbekov and Takhanov (2019) on the nature of word vectors, context vectors, and text generation, i.e. 1. A priori word vectors w1, . . . , wn \u2208 Rd are i.i.d. draws from isotropic multivariate Gaussian distribution: wi iid \u223cN \u00000, 1 dI \u0001 , where I is the d \u00d7 d identity matrix. 2. Context vectors c1, . . . , cn are related to word vectors according to ci = Qwi, i = 1, . . . , n, for some orthogonal matrix Q \u2208Rd\u00d7d. 3. Given a word j, the probability of any word i being in its context is given by p(i | j) \u221dpi \u00b7 ew\u22a4 j ci (1) where pi = p(i) is the unigram probability for the word i. Hypothesis. Under the assumptions 1\u20133 above, Assylbekov and Takhanov (2019) showed that each word\u2019s vector wi splits into two approximately equally-sized subvectors xi and yi, and the model (1) for generating a word i in the context of a word j can be rewritten as p(i | j) \u2248pi \u00b7 ex\u22a4 j xi\u2212y\u22a4 j yi. Interestingly, embeddings of the \ufb01rst type (xi and xj) are responsible for pulling the word i into the context of the word Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. dog dog barking puppy barking Figure 1: xand y-embeddings j, while embeddings of the second type (yi and yj) are responsible for pushing the word i away from the context of the word j. We hypothesize that the x-embeddings are more related to semantics, whereas the y-embeddings are more related to syntax. In what follows we provide a motivating example for this hypothesis and then empirically validate it through controlled experiments. Motivating Example Consider a phrase the dog barking at strangers The word \u2018barking\u2019 appears in the context of the word \u2018dog\u2019 but the word vector wbarking is not the closest to the word vector wdog (see Table 2). Instead, these vectors are split w\u22a4 dog = [x\u22a4 dog; y\u22a4 dog] w\u22a4 barking = [x\u22a4 barking; y\u22a4 barking] in such way that the quantity x\u22a4 dogxbarking \u2212y\u22a4 dogybarking is large enough. We can interpret this as follows: the word \u2018barking\u2019 is semantically close enough to the word \u2018dog\u2019 but is not the closest one: e.g. 
wpuppy is much closer to wdog than wbarking; on the other hand the word \u2018barking\u2019 syntactically \ufb01ts better being next to the word \u2018dog\u2019 than \u2018puppy\u2019, i.e. \u2212y\u22a4 dogypuppy < \u2212y\u22a4 dogybarking. This combination of semantic arXiv:1912.13413v1 [cs.CL] 23 Dec 2019 \fData Embeddings Size Finkelstein et al. Bruni et al. Radinsky et al. Luong, Socher, and Manning Google MSR WordSim MEN M. Turk Rare Words text8 w := [x; y] 200 .646 .650 .636 .063 .305 .319 Only x 100 .703 .693 .673 .149 .348 .213 Only y 100 .310 .102 .193 .019 .032 .128 enwik9 w := [x; y] 200 .664 .697 .616 .216 .518 .423 Only x 100 .714 .729 .652 .256 .545 .303 Only y 100 .320 .188 .196 .091 .096 .251 Table 1: Evaluation of word vectors and subvectors on the analogy tasks (Google and MSR) and on the similarity tasks (the rest). For word similarities evaluation metric is the Spearman\u2019s correlation with the human ratings, while for word analogies it is the percentage of correct answers. Model sizes are in number of trainable parameters. word i w\u22a4 dogci w\u22a4 dogwi x\u22a4 dogxi \u2212y\u22a4 dogyi puppy \u22120.204 13.331 6.564 \u22126.768 barking \u22120.263 10.343 5.040 \u22125.303 Table 2: Dot products between vectors. proximity (x\u22a4 dogxbarking) and syntactic \ufb01t (\u2212y\u22a4 dogybarking) allows the word \u2018barking\u2019 to appear in the context of the word \u2018dog\u2019. Experiments In this section we empirically verify our hypothesis. We train SGNS with tied weights (Assylbekov and Takhanov 2019) on two widely-used datasets, text8 and enwik9,1 which gives us word embeddings as well as their partitions: w\u22a4 i := [x\u22a4 i ; y\u22a4 i ]. The source code that reproduces our experiments is available at https://github.com/MaxatTezekbayev/Semantics--andSyntax-related-Subvectors-in-the-Skip-gram-Embeddings. x-Subvectors Are Related to Semantics We evaluate the whole vectors wi\u2019s, as well as the subvectors xi\u2019s and yi\u2019s on standard semantic tasks \u2014 word similarity and word analogy. We used the HYPERWORDS tool of Levy, Goldberg, and Dagan (2015) and we refer the reader to their paper for the methodology of evaluation. The results of evaluation are provided in Table 1. As one can see, the xsubvectors outperform the whole w-vectors in the similarity tasks and show competitive performance in the analogy tasks. However, the y-parts demonstrate poor performance in these tasks. This shows that the x-subvectors carry more semantic information than the y-subvectors. y-Subvectors Are Related to Syntax We train a softmax regression by feeding in the embedding of a current word to predict the part-of-speech (POS) tag of the next word: [ POS[t + 1] = softmax(Aw[t] + b) 1http://mattmahoney.net/dc/textdata.html. The enwik9 data was processed with the Perl-script WIKIFIL.PL provided on the same webpage. We evaluate the whole vectors and the subvectors on tagging the Brown corpus with the Universal POS tags. The resulting accuracies are provided in Table 3. We can see that Embeddings Size Trained on Trained on text8 enwik9 w := [x; y] 200 .445 .453 Only x 100 .381 .384 Only y 100 .426 .451 Table 3: Accuracies on a simpli\ufb01ed POS-tagging task. the y-subvectors are more suitable for POS-tagging than the x-subvectors, which means than the y-parts carry more syntactic information than the x-parts." 
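A brief sketch of the decomposition used above: each trained skip-gram vector is split into halves [x; y], and a candidate word i is scored against a center word j by x_j⊤x_i − y_j⊤y_i, as in Table 2. The embedding dictionary `emb` and the example words are illustrative assumptions; this is not the released experiment code.

```python
# Sketch of the x/y split and the context score from Table 2; `emb` is assumed to map
# word -> 1D numpy array of even length, produced by an SGNS model with tied weights.
import numpy as np

def split(w):
    d = w.shape[0] // 2
    return w[:d], w[d:]          # x-part (semantic half), y-part (syntactic half)

def context_score(emb, center, candidate):
    xj, yj = split(emb[center])
    xi, yi = split(emb[candidate])
    return xj @ xi - yj @ yi     # larger value => candidate fits the context of `center`

# e.g. context_score(emb, "dog", "barking") vs. context_score(emb, "dog", "puppy")
```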
+ } + ], + "Zhenisbek Assylbekov": [ + { + "url": "http://arxiv.org/abs/1802.08375v2", + "title": "Reusing Weights in Subword-aware Neural Language Models", + "abstract": "We propose several ways of reusing subword embeddings and other weights in\nsubword-aware neural language models. The proposed techniques do not benefit a\ncompetitive character-aware model, but some of them improve the performance of\nsyllable- and morpheme-aware models while showing significant reductions in\nmodel sizes. We discover a simple hands-on principle: in a multi-layer input\nembedding model, layers should be tied consecutively bottom-up if reused at\noutput. Our best morpheme-aware model with properly reused weights beats the\ncompetitive word-level model by a large margin across multiple languages and\nhas 20%-87% fewer parameters.", + "authors": "Zhenisbek Assylbekov, Rustem Takhanov", + "published": "2018-02-23", + "updated": "2018-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.NE", + "stat.ML", + "68T50", + "I.2.7" + ], + "main_content": "Introduction A statistical language model (LM) is a model which assigns a probability to a sequence of words. It is used in speech recognition, machine translation, part-of-speech tagging, information retrieval and other applications. Data sparsity is a major problem in building traditional n-gram language models, which assume that the probability of a word only depends on the previous n words. To deal with potentially severe problems when confronted with any n-grams that have not explicitly been seen before, some form of smoothing is necessary. Recent progress in statistical language modeling is connected with neural language models (NLM), which tackle the data sparsity problem by representing words as vectors. Typically this is done twice: at input (to embed the current word of a sequence into a vector space) and at output (to embed candidates for the next word of a sequence). Especially successful are the models in which the architecture of the neural network between input and output is recurrent (Mikolov et al., 2010), which we refer to as recurrent neural network language models (RNNLM). Tying input and output word embeddings in word-level RNNLM is a regularization technique, which was introduced earlier (Bengio et al., 2001; Mnih and Hinton, 2007) but has been widely used relatively recently, and there is empirical evidence (Press and Wolf, 2017) as well as theoretical justi\ufb01cation (Inan et al., 2017) that such a simple trick improves language modeling quality while decreasing the total number of trainable parameters almost two-fold, since most of the parameters are due to embedding matrices. Unfortunately, this regularization technique is not directly applicable to subword-aware neural language models as they receive subwords at input and return words at output. This raises the following questions: Is it possible to reuse embeddings and other parameters in subword-aware neural language models? Would it bene\ufb01t language modeling quality? We experimented with different subword units, embedding models, and ways of reusing parameters, and our answer to both questions is as follows: There are several ways to reuse weights in subword-aware neural language models, and none of them improve a competitive character-aware model, but some of them do bene\ufb01t syllableand morphemeaware models, while giving signi\ufb01cant reductions in model sizes. 
A simple morpheme-aware model that sums morpheme embeddings of a word bene\ufb01ts most from appropriate weight tying, showing a signi\ufb01cant gain over the competitive word-level baseline across different languages and data set sizes. Another contribution of this paper is the discovery of a hands-on principle that in a multi-layer input embedding model, layers should be tied consecutively bottom-up if reused at output. The source code for the morpheme-aware model is available at https://github.com/ zh3nis/morph-sum. arXiv:1802.08375v2 [cs.CL] 25 Apr 2018 \f2 Related Work Subword-aware NLM: There has been a large number of publications in the last 2\u20133 years on subword-level and subword-aware NLMs,1 especially for the cases when subwords are characters (Ling et al., 2015; Kim et al., 2016; Verwimp et al., 2017) or morphemes (Botha and Blunsom, 2014; Qiu et al., 2014; Cotterell and Sch\u00a8 utze, 2015). Less work has been done on syllable-level or syllable-aware NLMs (Mikolov et al., 2012; Assylbekov et al., 2017; Yu et al., 2017). For a thorough and up-to-date review of the previous work on subword-aware neural language modeling we refer the reader to the paper by Vania and Lopez (2017), where the authors systematically compare different subword units (characters, character trigrams, BPE, morphs/morphemes) and different representation models (CNN, Bi-LSTM, summation) on languages with various morphological typology. Tying weights in NLM: Reusing embeddings in word-level neural language models is a technique which was used earlier (Bengio et al., 2001; Mnih and Hinton, 2007) and studied in more details recently (Inan et al., 2017; Press and Wolf, 2017). However, not much work has been done on reusing parameters in subword-aware or subwordlevel language models. Jozefowicz et al. (2016) reused the CharCNN architecture of Kim et al. (2016) to dynamically generate softmax word embeddings without sharing parameters with an input word-embedding sub-network. They managed to signi\ufb01cantly reduce the total number of parameters for large models trained on a huge dataset in English (1B tokens) with a large vocabulary (800K tokens) at the expense of deteriorated performance. Labeau and Allauzen (2017) used similar approach to augment the output word representations with subword-based embeddings. They experimented with characters and morphological decompositions, and tried different compositional models (CNN, Bi-LSTM, concatenation) on Czech dataset consisting of 4.7M tokens. They were not tying weights between input and output representations, since their preliminary experiments with tied weights gave worse results. Our approach differs in the following aspects: 1Subword-level LMs rely on subword-level inputs and make predictions at the level of subwords; subword-aware LMs also rely on subword-level inputs but make predictions at the level of words. we focus on the ways to reuse weights at output, seek both model size reduction and performance improvement in subword-aware language models, try different subword units (characters, syllables, and morphemes), and make evaluation on small (1M\u20132M tokens) and medium (17M\u201351M tokens) data sets across multiple languages. 3 Recurrent Neural Language Model Let W be a \ufb01nite vocabulary of words. We assume that words have already been converted into indices. Let Ein W \u2208R|W|\u00d7dW be an input embedding matrix for words \u2014 i.e., it is a matrix in which the wth row (denoted as w) corresponds to an embedding of the word w \u2208W. 
Based on word embeddings w1:k = w1, . . . , wk for a sequence of words w1:k, a typical word-level RNN language model produces a sequence of states h1:k according to ht = RNNCell(wt, ht\u22121), h0 = 0. (1) The last state hk is assumed to contain information on the whole sequence w1:k and is further used for predicting the next word wk+1 of a sequence according to the probability distribution Pr(wk+1|w1:k) = softmax(hkEout W + b), (2) where Eout W \u2208RdLM\u00d7|W| is an output embedding matrix, b \u2208R|W| is a bias term, and dLM is a state size of the RNN. Subword-based word embeddings: One of the more recent advancements in neural language modeling has to do with segmenting words at input into subword units (such as characters, syllables, morphemes, etc.) and composing each word\u2019s embedding from the embeddings of its subwords. Formally, let S be a \ufb01nite vocabulary of subwords,2 and let Ein S \u2208R|S|\u00d7dS be an input embedding matrix for subwords. Any word w \u2208W is a sequence of its subwords (s1, s2, . . . , snw) = \u03c3(w), and hence can be represented as a sequence of the corresponding subword vectors: [s1, s2, . . . , snw]. (3) A subword-based word embedding model E(\u00b7; Ein S, \u0398in) with parameters (Ein S, \u0398in) constructs a word vector x from the sequence of subword vectors (3), i.e. x = E(\u03c3(w); Ein S, \u0398in), (4) 2As in the case of words, we assume that subwords have already been converted into indices. \funconstitutional conditions on subword-based softmax word-level RNNLM word vector subword-based embedding of a word subword vectors un con sti tu tional imposes unconstitutional conditions Figure 1: Subword-aware RNNLM with subwordbased softmax. which is then fed into a RNNLM (1) instead of a plain embedding w. The additional parameters \u0398in correspond to the way the embedding model constructs the word vector: for instance, in the CharCNN model of Kim et al. (2016), \u0398in are the weights of the convolutional and highway layers. Reusing word embeddings: Another recent technique in word-level neural language modeling is tying input and output word embeddings: Ein W = \u0000Eout W \u0001T , under the assumption that dW = dLM. Although being useful for word-level language modeling (Press and Wolf, 2017; Inan et al., 2017), this regularization technique is not directly applicable to subword-aware language models, as they receive subword embeddings at input and return word embeddings at output. In the next section we describe a simple technique to allow reusing subword embeddings Ein S as well as other parameters \u0398in in a subword-aware RNNLM. 4 Reusing Weights Let Eout S be an output embedding matrix for subwords and let us modify the softmax layer (2) so that it utilizes Eout S instead of the word embedding matrix Eout W . The idea is fairly straightforward: we reuse an embedding model (4) to construct a new word embedding matrix: \u02c6 Eout W = [E(\u03c3(w); Eout S , \u0398out) for w \u2208W], (5) and use \u02c6 Eout W instead of Eout W in the softmax layer (2). Such modi\ufb01cation of the softmax layer will be referred to as subword-based softmax. The overall architecture of a subword-aware RNNLM with subword-based softmax is given in Figure 1. Such a model allows several options for reusing embeddings and weights, which are discussed below. \u2022 Reusing neither subword embeddings nor embedding model weights: As was shown by Jozefowicz et al. 
(2015), this can signi\ufb01cantly reduce the total number of parameters for large models trained on huge datasets (1B tokens) with large vocabularies (800K tokens). However, we do not expect signi\ufb01cant reductions on smaller data sets (1-2M tokens) with smaller vocabularies (10-30K tokens), which we use in our main experiments. \u2022 Reusing subword embeddings (RE) can be done by setting Eout S = Ein S in (5). This will give a signi\ufb01cant reduction in model size for models with |Ein S| \u226b|\u0398in|,3 such as the morphemeaware model of Botha and Blunsom (2014). \u2022 Reusing weights of the embedding model (RW) can be done by setting \u0398out = \u0398in. Unlike the previous option, this should signi\ufb01cantly reduce sizes of models with |Ein S| \u226a|\u0398in|, such as the character-aware model of Kim et al. (2016). \u2022 Reusing both subword embeddings and weights of the embedding model (RE+RW) can be done by setting Eout S = Ein S and \u0398out = \u0398in simultaneously in (5). This should signi\ufb01cantly reduce the number of trainable parameters in any subword-aware model. Here we use exactly the same word representations both at input and at output, so this option corresponds to the reusing of plain word embeddings in pure word-level language models. 5 Experimental Setup Data sets: All models are trained and evaluated on the PTB (Marcus et al., 1993) and the WikiText2 (Merity et al., 2017) data sets. For the PTB we utilize the standard training (0-20), validation (21-22), and test (23-24) splits along with preprocessing per Mikolov et al. (2010). WikiText-2 is an alternative to PTB, which is approximately two times as large in size and three times as large 3|A| denotes number of elements in A. \fin vocabulary (Table 1). Data set T |W| |S| |M| PTB 0.9M 10K 5.9K 3.4K WikiText-2 2.1M 33K 19.5K 8.8K Table 1: Corpus statistics. T = number of tokens in training set; |W| = word vocabulary size; |S| = syllable vocabulary size; |M| = morph vocabulary size. Subword-based embedding models: We experiment with existing representational models which have previously proven effective for language modeling. \u2022 CharCNN (Kim et al., 2016) is a characteraware convolutional model, which performs on par with the 2014\u20132015 state-of-the-art wordlevel LSTM model (Zaremba et al., 2014) despite having 60% fewer parameters. \u2022 SylConcat is a simple concatenation of syllable embeddings suggested by Assylbekov et al. (2017), which underperforms CharCNN but has fewer parameters and is trained faster. \u2022 MorphSum is a summation of morpheme embeddings, which is similar to the approach of Botha and Blunsom (2014) with one important difference: the embedding of the word itself is not included into the sum. We do this since other models do not utilize word embeddings. In all subword-aware language models we inject a stack of two highway layers (Srivastava et al., 2015) right before the word-level RNNLM as done by Kim et al. (2016), and the non-linear activation in any of these highway layers is a ReLU. The highway layer size is denoted by dHW. Word-level RNNLM: There is a large variety of RNN cells to choose from in (1). To make our results directly comparable to the previous work of Inan et al. (2017), Press and Wolf (2017) on reusing word embeddings we select a rather conventional architecture \u2013 a stack of two LSTM cells (Hochreiter and Schmidhuber, 1997). 
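For concreteness, a compact PyTorch sketch of the MorphSum encoder described above (a sum of morpheme embeddings followed by a stack of two ReLU highway layers, small configuration d_S = d_HW = 200) is given below; the padding convention and the example indices are illustrative assumptions, and this is not the released morph-sum code.

```python
# Sketch of the MorphSum word embedding: sum of morpheme embeddings + two highway layers.
import torch
import torch.nn as nn

class Highway(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))                 # transform gate
        return t * torch.relu(self.lin(x)) + (1 - t) * x

class MorphSum(nn.Module):
    def __init__(self, n_morphs, d=200, pad_idx=0):     # pad_idx=0 is an assumption
        super().__init__()
        self.emb = nn.Embedding(n_morphs, d, padding_idx=pad_idx)
        self.highways = nn.Sequential(Highway(d), Highway(d))

    def forward(self, morph_ids):
        # morph_ids: (batch, max_morphs) padded morpheme indices of each word;
        # note that the word's own embedding is deliberately not added to the sum.
        word = self.emb(morph_ids).sum(dim=1)
        return self.highways(word)

# e.g. vec = MorphSum(n_morphs=3400)(torch.tensor([[5, 17, 0]]))  # hypothetical morpheme ids
```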
Hyperparameters: We experiment with two con\ufb01gurations for the state size dLM of the word-level RNNLM: 200 (small models) and 650 (mediumsized models). In what follows values outside brackets correspond to small models, and values within brackets correspond to medium models. \u2022 CharCNN: We use the same hyperparameters as in the work of Kim et al. (2016), where \u201clarge model\u201d stands for what we call \u201cmedium-sized model\u201d. \u2022 SylConcat: dS = 50 (200), dHW = 200 (800). These choices are guided by the work of Assylbekov et al. (2017). \u2022 MorphSum: dS = dHW = 200 (650). These choices are guided by Kim et al. (2016). Optimizaton method is guided by the previous works (Zaremba et al., 2014; Gal and Ghahramani, 2016) on word-level language modeling with LSTMs. See Appendix A for details. Syllabi\ufb01cation and morphological segmentation: True syllabi\ufb01cation of a word requires its grapheme-to-phoneme conversion and then its splitting up into syllables based on some rules. True morphological segmentation requires rather expensive morphological analysis and disambiguation tools. Since these are not always available for under-resourced languages, we decided to utilize Liang\u2019s widely-used hyphenation algorithm (Liang, 1983) and an unsupervised morphological segmentation tool, Morfessor 2.0 (Virpioja et al., 2013), as approximations to syllabi\ufb01cation and morphological segmentation respectively. We use the default con\ufb01guration of Morfessor 2.0. Syllable and morpheme vocabulary sizes for both PTB and WikiText-2 are reported in Table 1. 6 Results In order to investigate the extent to which each of our proposed options bene\ufb01ts the language modeling task, we evaluate all four modi\ufb01cations (no reusing, RE, RW, RE+RW) for each subwordaware model against their original versions and word-level baselines. The results of evaluation are given in Table 2. We have both negative and positive \ufb01ndings which are summarized below. Negative results: \u2022 The \u2018no reusing\u2019 and RW options should never be applied in subword-aware language models as they deteriorate the performance. \u2022 Neither of the reusing options bene\ufb01ts CharCNN when compared to the original model with a plain softmax layer. Positive results: \u2022 The RE+RW option puts CharCNN\u2019s performance close to that of the original version, while reducing the model size by 30\u201375%. \u2022 The RE and RE+RW are the best reusing options for SylConcat, which make it on par with the original CharCNN model, despite having 35\u2013 75% fewer parameters. 
\fPTB Wikitext-2 Model Small Medium Small Medium Size PPL Size PPL Size PPL Size PPL Word 4.7M 88.1 19.8M 79.8 14M 111.9 50.1M 95.7 Word + reusing word emb\u2019s 2.7M 86.6 13.3M 74.5 7.3M 104.1 28.4M 89.9 CharCNN (original) 4.1M 87.3 19.4M 77.1 8.7M 101.6 34.5M 88.7 CharCNN 3.3M 97.5 18.5M 89.2 3.3M 110.6 18.5M \u2014 CharCNN + RE 3.3M 99.1 18.5M 82.9 3.3M 110.2 18.5M \u2014 CharCNN + RW 2.2M 93.5 13.6M 103.2 2.2M 111.5 13.6M \u2014 CharCNN + RE + RW 2.2M 91.0 13.6M 79.9 2.2M 101.8 13.6M \u2014 SylConcat (original) 3.2M 89.0 18.7M 77.9 8.5M 105.7 36.6M 91.4 SylConcat 1.7M 96.9 17.7M 90.5 3.1M 118.1 23.2M 114.8 SylConcat + RE 1.4M 87.4 16.6M 75.7 2.1M 101.0 19.3M 94.2 SylConcat + RW 1.6M 99.9 15.2M 96.2 2.9M 118.9 19.4M 112.1 SylConcat + RE + RW 1.2M 88.4 12.7M 76.2 1.9M 101.0 15.5M 86.7 MorphSum (original) 3.5M 87.5 17.2M 78.5 9.3M 101.9 35.8M 90.1 MorphSum 2.4M 89.0 14.5M 82.4 4.5M 100.3 21.7M 86.7 MorphSum + RE 1.6M 85.5 12.3M 74.1 2.8M 97.6 15.9M 81.2 MorphSum + RW 2.2M 89.6 12.8M 81.0 4.4M 101.4 20.0M 86.6 MorphSum + RE + RW 1.5M 85.1 10.7M 72.2 2.6M 96.5 14.2M 77.5 Table 2: Results. The pure word-level models and original versions of subword-aware models (with regular softmax) serve as baselines. Reusing the input embedding architecture at output in CharCNN leads to prohibitively slow models when trained on WikiText-2 (\u2248800 tokens/sec on NVIDIA Titan X Pascal); we therefore abandoned evaluation of these con\ufb01gurations. \u2022 The RE and RE+RW con\ufb01gurations bene\ufb01t MorphSum making it not only better than its original version but also better than all other models and signi\ufb01cantly smaller than the wordlevel model with reused embeddings. In what follows we proceed to analyze the obtained results. 6.1 CharCNN is biased towards surface form We hypothesize that the reason CharCNN does not bene\ufb01t from tied weights is that CNN over character embeddings is an excessively \ufb02exible model which learns to adapt to a surface form more than to semantics. To validate this hypothesis we pick several words4 from the English PTB vocabulary and consider their nearest neighbors under cosine similarity as produced by the medium-sized models (with the regular softmax layer) at input (Table 3). As we can see from the examples, the CharCNN model is somewhat more biased towards surface forms at input than SylConcat and 4We pick the same words as Kim et al. (2016). MorphSum.5 When CharCNN is reused to generate a softmax embedding matrix this bias is propagated to output embeddings as well (Table 3). 6.2 Tying weights bottom-up From Table 2 one can notice that tying weights without tying subword embeddings (RW) always results in worse performance than the tying both weights and embeddings (RE+RW). Recall that subword embedding lookup is done before the weights of subword-aware embedding model are used (see Figure 1). This leads us to the following Conjecture. Let Ein S = \u0398in 0 , \u0398in 1 , \u0398in 2 , . . . , \u0398in n be the parameters of the consecutive layers of a subword-aware input embedding model (4), i.e. x = x(n) = fn \u0000x(n\u22121); \u0398in n \u0001 , ..., x(1) = f1 \u0000x(0); \u0398in 1 \u0001 , x(0) = f0 \u0000\u03c3(w); Ein S \u0001 and let Eout S = \u0398out 0 , \u0398out 1 , \u0398out 2 , . . . , \u0398out n be the parameters of the consecutive layers of a subword-aware embedding model used to generate the output projection matrix (5). Let A be a subword-aware neu5A similar observation for character-aware NLMs was made by Vania and Lopez (2017). 
\fModel In Vocabulary Out-of-Vocabulary while his you richard trading computer-aided misinformed looooook INPUT EMBEDDINGS chile hhs god graham traded computer-guided informed look CharCNN whole its we harold tradition computerized performed looks (original) meanwhile her your edward heading computer-driven formed looking although this i ronald eroding black-and-white con\ufb01rmed looked although my kemp thomas printing computer-guided reinforced \u2014 SylConcat though historic welch robert working computer-driven surprised \u2014 (original) when your i stephen lending computerized succeeding \u2014 mean irish shere alan recording computer succeed \u2014 although mystery i stephen program-trading cross-border informed nato MorphSum whenever my ghandi leonard insider-trading bank-backed injured lesko (original) when whiskey we william relations pro-choice con\ufb01ned imo 1980s sour cadillac robert insurance government-owned formed swapo I/O EMB\u2019S thi her we gerard trades computer-guided informed look CharCNN when its your gerald trader large-scale performed outlook + RE + RW after the young william traders high-quality outperformed looks above heir why edward trade futures-related con\ufb01rmed looked Table 3: Nearest neighbors based on cosine similarity. We underline character ngrams in words which are close to the given word orthographically rather than semantically. The pyphen syllabi\ufb01er, which is used in SylConcat, failed to segment the word \u2018looooook\u2019 into syllables, and therefore its neighbors are not available. ral language model in which the \ufb01rst (j+1) layers of input and output embedding sub-networks have tied weights: \u2200i = 0, j : \u0398in i = \u0398out i , and let B be a model in which at least one layer below the (j + 1)th layer has untied weights: \u2203i = 0, j \u22121 : \u0398in i \u0338= \u0398out i , \u0398in j = \u0398out j . Then model B performs at most as well as model A, i.e. PPLA \u2264PPLB. To test this conjecture empirically, we conduct the following experiments: in all three embedding models (CharCNN, SylConcat, and MorphSum), we reuse different combinations of layers. If an embedding model has n layers, there are 2n ways to reuse them, as each layer can either be tied or untied at input and output. However, there are two particular con\ufb01gurations for each of the embedding models that do not interest us: (i) when neither of the layers is reused, or (ii) when only the very \ufb01rst embedding layer is reused. Hence, for each model we need to check 2n \u22122 con\ufb01gurations. For faster experimentation we evaluate only small-sized models on PTB. The results are reported in Table 4. As we can see, the experiments in general reject our conjecture: in SylConcat leaving an untied \ufb01rst highway layer between tied embedding and second highway layers (denote this as HW2+Emb) turned out to be slightly better than tying all three layers (HW2+HW1+Emb). Recall, that a highway is a weighted average between nonlinear and identity transformations of the incoming vector: x 7\u2192t \u2299ReLU(xA + b) + (1 \u2212t) \u2299x, where t = \u03c3(xW + c) is a transform gate, A, W, b and c are trainable parameters, and \u2299is the element-wise multiplication operator. To \ufb01nd out why leaving an untied highway below a tied one is bene\ufb01cial in SylConcat, we compare the distributions of the transform gate values t from the \ufb01rst highway layers of both con\ufb01gurations, HW2+Emb and HW2+HW1+Emb, in SylConcat and MorphSum (Figure 2). 
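The transform gate t in the formula above is exactly what Figure 2 plots: its values can be collected on held-out data and their density inspected. A minimal PyTorch sketch of such a highway layer is below; the gate-bias initialization and the collect hook are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class Highway(nn.Module):
    # x -> t * ReLU(x A + b) + (1 - t) * x, with transform gate t = sigmoid(x W + c)
    def __init__(self, dim, gate_bias_init=-2.0):
        super().__init__()
        self.lin = nn.Linear(dim, dim)     # A, b
        self.gate = nn.Linear(dim, dim)    # W, c
        nn.init.constant_(self.gate.bias, gate_bias_init)  # start close to the identity (carry) path
    def forward(self, x, collect=None):
        t = torch.sigmoid(self.gate(x))    # transform gate values (what Figure 2 visualizes)
        if collect is not None:            # optionally stash gate values for later density estimates
            collect.append(t.detach().flatten())
        return t * torch.relu(self.lin(x)) + (1.0 - t) * x

Stacking two such layers with separate parameters gives the HW1/HW2 blocks referred to in Table 4, and the collected t values are what the density plots in Figure 2 are built from.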
We can see that SylConcat heavily relies on nonlinearity in the first highway layer, while MorphSum does not utilize much of it. This means that in MorphSum the highway is close to an identity operator (t ≈ 0) and does not transform the sum of morpheme vectors much, either at input or at output. Therefore, tying the first highway layer is natural for MorphSum. SylConcat, on the other hand, applies non-linear transformations to the concatenation of syllable vectors, and hence makes additional preparations of the word vector for the needs of the RNNLM at input and for Softmax prediction at output. These needs differ from each other (as shown in the next subsection). This is why SylConcat benefits from an additional degree of freedom when the first highway is left untied.
CharCNN (HW2 HW1 CNN Emb | PPL): ✓ 94.1 | ✓ ✓ 92.8 | ✓ 94.6 | ✓ ✓ 94.5 | ✓ ✓ 93.1 | ✓ ✓ ✓ 90.1 | ✓ 94.9 | ✓ ✓ 99.2 | ✓ ✓ 94.1 | ✓ ✓ ✓ 92.5 | ✓ ✓ 94.3 | ✓ ✓ ✓ 97.8 | ✓ ✓ ✓ 96.3 | ✓ ✓ ✓ ✓ 91.0
SylConcat (HW2 HW1 Emb | PPL): ✓ 95.4 | ✓ ✓ 87.4 | ✓ 99.0 | ✓ ✓ 87.9 | ✓ ✓ 96.2 | ✓ ✓ ✓ 88.4
MorphSum (HW2 HW1 Emb | PPL): ✓ 90.0 | ✓ ✓ 84.7 | ✓ 89.9 | ✓ ✓ 85.7 | ✓ ✓ 89.4 | ✓ ✓ ✓ 85.1
Table 4: Reusing different combinations of layers in small CharCNN, small SylConcat and small MorphSum on PTB data. "✓" means that the layer is reused at output.
Figure 2: Kernel density estimations of the transform gate values of the first highway layers in SylConcat (left) and MorphSum. Values corresponding to 'Input' and 'Output' curves come from the HW2+Emb configurations, while those corresponding to 'Tied' curves come from the HW2+HW1+Emb configurations.
Despite not holding in all cases, and because it holds in many, we believe that the above-mentioned conjecture is still useful. In short, it can be summarized as a practical hands-on rule: layers should be tied consecutively bottom-up, i.e. one should not leave untied layer(s) below a tied one. Keep in mind that this rule does not guarantee a performance increase as more and more layers are tied. It only says that leaving untied weights below the tied ones is likely to be worse than not doing so.
6.3 Difference between input and output embeddings
One can notice from the results of our experiments (Table 4) that having an untied second highway layer above the first one always leads to better performance than when it is tied. This means that there is a benefit in letting word embeddings differ slightly at input and output, i.e. in specializing them for the needs of the RNNLM at input and of Softmax at output. This specialization is quite natural, as input and output representations of words have two different purposes: input representations send a signal to the RNNLM about the current word in a sequence, while output representations are needed to predict the next word given all the preceding words. The difference between input and output word representations is discussed in greater detail by Garten et al. (2015) and Press and Wolf (2017).
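A direct way to see this specialization is to probe a trained model's input and output representations of the same word, for instance by comparing their nearest neighbours as in Table 3. A small NumPy sketch follows; the two {word: vector} dictionaries (input_embeddings, output_embeddings) are hypothetical variables extracted from a trained model, not part of the paper's released code.

import numpy as np

def unit(v):
    # L2-normalize a vector so that dot products become cosine similarities
    return v / np.linalg.norm(v)

def nearest_neighbors(query, embeddings, k=4):
    # k most cosine-similar words to `query` within one embedding table
    q = unit(embeddings[query])
    scored = sorted(((float(q @ unit(v)), w) for w, v in embeddings.items() if w != query),
                    reverse=True)
    return [w for _, w in scored[:k]]

def input_output_agreement(word, in_emb, out_emb, k=4):
    # overlap of the word's neighbour sets under input vs. output embeddings
    a = set(nearest_neighbors(word, in_emb, k))
    b = set(nearest_neighbors(word, out_emb, k))
    return len(a & b) / k

# e.g. input_output_agreement("while", input_embeddings, output_embeddings)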
Here we decided to verify the difference indirectly: we test whether the intrinsic dimensionality of word embeddings differs significantly at input and output. For this, we apply principal component analysis to word embeddings produced by all models in "no reusing" mode. The results are given in Figure 3, where we can see that the dimensionalities of input and output embeddings differ in the word-level, CharCNN, and SylConcat models, but the difference is less significant in the MorphSum model. Interestingly, in the word-level and MorphSum models the output embeddings have more principal components than the input ones. In CharCNN and SylConcat, however, the results are the other way around. We defer the study of this phenomenon to future work.
Figure 3: PCA applied to input and output word embeddings produced by different models. Horizontal axis corresponds to the number of principal components, vertical axis corresponds to the percentage of total variance to retain. From left to right: word-level model, CharCNN, SylConcat, MorphSum.
6.4 CharCNN generalizes better than MorphSum
One may expect larger units to work better than smaller units, but smaller units to generalize better than larger units. This certainly depends on how one defines the generalizability of a language model. If it is the ability to model unseen text with unseen words, then, indeed, character-aware models may perform better than syllable- or morpheme-aware ones. This can be partially seen from Table 3, where the OOV words are better handled by CharCNN in terms of in-vocabulary nearest neighbors. However, to fully validate the above-mentioned expectation we conduct additional experiments: we train two models, CharCNN and MorphSum, on PTB and then evaluate them on the test set of Wikitext-2 (245K words, 10K word-types). Some words in Wikitext-2 contain characters or morphemes that are not present in PTB, and therefore such words cannot be embedded by CharCNN or MorphSum correspondingly. Such words were replaced by the <unk> token, and we call them new OOVs6. The results of our experiments are reported in Table 5.
6These are "new" OOVs, since the original test set of Wikitext-2 already contains "old" OOVs marked as <unk>.
Model | # new OOVs | PPL
CharCNN + RE + RW | 3659 | 306.8
MorphSum + RE + RW | 4195 | 316.2
Table 5: Training on PTB and testing on Wikitext-2.
Indeed, CharCNN faces fewer OOVs on unseen text, and thus generalizes better than MorphSum.
6.5 Performance on non-English Data
According to Table 2, MorphSum+RE+RW comfortably outperforms the strong baseline Word+RE (Inan et al., 2017).
Model | FR | ES | DE | CS | RU
D-S, S: Word+RE | 218 | 205 | 305 | 514 | 364
D-S, S: MorphSum+RE+RW | 188 | 171 | 246 | 371 | 237
D-S, M: Word+RE | 205 | 193 | 277 | 488 | 351
D-S, M: MorphSum+RE+RW | 172 | 157 | 222 | 338 | 210
D-M, S: Word+RE | 167 | 149 | 285 | 520 | 267
D-M, S: MorphSum+RE+RW | 159 | 143 | 242 | 463 | 229
Table 6: Evaluation on non-English data. MorphSum+RE+RW has significantly fewer parameters than Word+RE (Appendix B). S — small model, M — medium model, D-S — small data, D-M — medium data; FR — French, ES — Spanish, DE — German, CS — Czech, RU — Russian.
It is interesting to see whether this advantage extends to non-English languages, which have richer morphology.
For this purpose we conduct evaluation of both models on small (1M tokens) and medium (17M\u201351M tokens) data in \ufb01ve languages (see corpora statistics in Appendix B). Due to hardware constraints we only train the small models on medium-sized data. We used the same architectures for all languages and did not perform any language-speci\ufb01c tuning of hyperparameters, which are speci\ufb01ed in Appendix A. The results are provided in Table 6. As one can see, the advantage of the morpheme-aware model over the word-level one is even more pronounced for non-English data. Also, we can notice that the gain is larger for small data sets. We hypothesize that the advantage of MorphSum+RE+RW over Word+RE diminishes with the decrease of typetoken ratio (TTR). A scatterplot of PPL change versus TTR (Figure 4) supports this hypothesis. Moreover, there is a strong correlation between these two quantities: \u02c6 \u03c1(\u2206PPL, TTR) = 0.84, i.e. one can predict the mean decrease in PPL from the TTR of a text with a simple linear regression: \u2206PPL \u22482, 109 \u00d7 TTR. \fModel PTB WT-2 CS DE ES FR RU AWD-LSTM-Word w/o emb. dropout 61.38 68.50 410 241 145 151 232 AWD-LSTM-MorphSum + RE + RW 61.17 66 .92 253 177 126 140 162 Table 7: Replacing LSTM with AWD-LSTM. 10 20 30 40 50 60 0 20 40 60 80 100 120 140 TTR \u00d7 1000 \u2206PPL FR ES DE CS RU FR ES DE CS RU PTB WT2 small data medium data Figure 4: PPL improvement vs TTR. \u2206PPL = PPLWord+RE \u2212PPLMorphSum+RE+RW. 6.6 Replacing LSTM with AWD-LSTM The empirical perplexities in Table 2 are way above the current state-of-the-art on the same datasets (Melis et al., 2018). However, the approach of Melis et al. (2018) requires thousands of evaluations and is feasible for researchers who have access to hundreds of GPUs. Unfortunately, we do not have such access. Also, the authors do not disclose the optimal hyperparameters they found, and thus we could not reproduce their models. There is another state-of-the-art language model, AWD-LSTM (Merity et al., 2018), which has open-source code. We replaced this model\u2019s word embedding layer with the MorphSum subnetwork and fully reused morpheme embeddings and other weights of MorphSum at output. We refer to such modi\ufb01cation as AWD-LSTMMorphSum + RE + RW. We trained both models without \ufb01ne-tuning (due to time constraints) and we did not use embedding dropout (section 4.3 of Merity et al. (2018)) in either model, as it is not obvious how embeddings should be dropped in the case of AWD-LSTM-MorphSum. The results of evaluation on the PTB, Wikitext-2, and nonEnglish datasets are given in Table 7. Although AWD-LSTM-MorphSum is on par with AWD-LSTM-Word on PTB and is slightly better on Wikitext-2, replacing plain word embeddings with the subword-aware model with appropriately reused parameters is crucial for nonEnglish data. Notice that AWD-LSTM underperforms LSTM (used by us) on Czech dataset (cf. Table 6). We think that the hyperparameters of AWD-LSTM in Merity et al. (2018) are thoroughly tuned for PTB and Wikitext-2 and may poorly generalize to other datasets. 7" + }, + { + "url": "http://arxiv.org/abs/1707.06480v1", + "title": "Syllable-aware Neural Language Models: A Failure to Beat Character-aware Ones", + "abstract": "Syllabification does not seem to improve word-level RNN language modeling\nquality when compared to character-based segmentation. 
However, our best\nsyllable-aware language model, achieving performance comparable to the\ncompetitive character-aware model, has 18%-33% fewer parameters and is trained\n1.2-2.2 times faster.", + "authors": "Zhenisbek Assylbekov, Rustem Takhanov, Bagdat Myrzakhmetov, Jonathan N. Washington", + "published": "2017-07-20", + "updated": "2017-07-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.NE", + "stat.ML", + "68T50", + "I.2.7" + ], + "main_content": "Introduction Recent advances in neural language modeling (NLM) are connected with character-aware models (Kim et al., 2016; Ling et al., 2015b; Verwimp et al., 2017). This is a promising approach, and we propose the following direction related to it: We would like to make sure that in the pursuit of the most \ufb01ne-grained representations one has not missed possible intermediate ways of segmentation, e.g., by syllables. Syllables, in our opinion, are better supported as linguistic units of language than single characters. In most languages, words can be naturally split into syllables: ES: el par-la-men-to a-po-y\u00b4 o la en-mien-da RU: par-la-ment pod-der-\u02c7 zal po-prav-ku (EN: the parliament supported the amendment) Based on this observation, we attempted to determine whether syllable-aware NLM has any advantages over character-aware NLM. We experimented with a variety of models but could not \ufb01nd any evidence to support this hypothesis: splitting words into syllables does not seem to improve the language modeling quality when compared to splitting into characters. However, there are some positive \ufb01ndings: while our best syllable-aware language model achieves performance comparable to the competitive character-aware model, it has 18%\u201333% fewer parameters and is 1.2\u20132.2 times faster to train. 2 Related Work Much research has been done on subword-level and subword-aware1 neural language modeling when subwords are characters (Ling et al., 2015b; Kim et al., 2016; Verwimp et al., 2017) or morphemes (Botha and Blunsom, 2014; Qiu et al., 2014; Cotterell and Sch\u00a8 utze, 2015). However, not much work has been done on syllable-level or syllable-aware NLM. Mikolov et al. (2012) show that subword-level language models outperform character-level ones.2 They keep the most frequent words untouched and split all other words into syllable-like units. Our approach differs mainly in the following aspects: we make predictions at the word level, use a more linguistically sound syllabi\ufb01cation algorithm, and consider a variety of more advanced neural architectures. We have recently come across a concurrent paper (Vania and Lopez, 2017) where the authors systematically compare different subword units (characters, character trigrams, BPE (Sennrich et al., 2016), morphemes) and different representation models (CNN, Bi-LSTM, summation) on languages with various morphological typology. However, they do not consider syllables, and they experiment with relatively small models on small data sets (0.6M\u20131.4M tokens). 1Subword-level LMs rely on subword-level inputs and make predictions at the level of subwords; subword-aware LMs also rely on subword-level inputs but make predictions at the level of words. 2Not to be confused with character-aware ones, see the previous footnote. 
arXiv:1707.06480v1 [cs.CL] 20 Jul 2017 \funconstitutional conditions on stack of two LSTMs word vector Highway layers (optional) Syllable-aware word embedding model Syllable embeddings un con sti tu tional imposes unconstitutional conditions Figure 1: Syllable-aware language model. 3 Syllable-aware word embeddings Let W and S be \ufb01nite vocabularies of words and syllables respectively. We assume that both words and syllables have already been converted into indices. Let ES \u2208R|S|\u00d7dS be an embedding matrix for syllables \u2014 i.e., it is a matrix in which the sth row (denoted as s) corresponds to an embedding of the syllable s \u2208S. Any word w \u2208W is a sequence of its syllables (s1, s2, . . . , snw), and hence can be represented as a sequence of the corresponding syllable vectors: [s1, s2, . . . , snw]. (1) The question is: How shall we pack the sequence (1) into a single vector x \u2208RdW to produce a better embedding of the word w?3 In our case \u201cbetter\u201d means \u201cbetter than a character-aware embedding of w via the Char-CNN model of Kim et al. (2016)\u201d. Below we present several viable approaches. 3.1 Recurrent sequential model (Syl-LSTM) Since the syllables are coming in a sequence it is natural to try a recurrent sequential model: ht = f(st, ht\u22121), h0 = 0, (2) which converts the sequence of syllable vectors (1) into a sequence of state vectors h1:nw. The last 3The same question applies to any model that segments words into a sequence of characters or other subword units. state vector hnw is assumed to contain the information on the whole sequence (1), and is therefore used as a word embedding for w. There is a big variety of transformations from which one can choose f in (2); however, a recent thorough evaluation (Jozefowicz et al., 2015) shows that the LSTM (Hochreiter and Schmidhuber, 1997) with its forget bias initialized to 1 outperforms other popular architectures on almost all tasks, and we decided to use it for our experiments. We will refer to this model as Syl-LSTM. 3.2 Convolutional model (Syl-CNN) Inspired by recent work on character-aware neural language models (Kim et al., 2016) we decided to try this approach (Char-CNN) on syllables. Our case differs mainly in the following two aspects: 1. The set of syllables S is usually bigger than the set of characters C,4 and also the dimensionality dS of syllable vectors is expected to be greater than the dimensionality dC of character vectors. Both of these factors result in allocating more parameters on syllable embeddings compared to character embeddings. 2. On average a word contains fewer syllables than characters, and therefore we need narrower convolutional \ufb01lters for syllables. This results in spending fewer parameters per convolution. This means that by varying dS and the maximum width of convolutional \ufb01lters L we can still \ufb01t the parameter budget of Kim et al. (2016) to allow fair comparison of the models. Like in Char-CNN, our syllable-aware model, which is referred to as Syl-CNN-[L], utilizes maxpooling and highway layers (Srivastava et al., 2015) to model interactions between the syllables. The dimensionality of a highway layer is denoted by dHW. 3.3 Linear combinations We also considered using linear combinations of syllable-vectors to represent the word embedding: x = Pnw t=1 \u03b1t(st) \u00b7 st. (3) The choice for \u03b1t is motivated mainly by the existing approaches (discussed below) which proved to be successful for other tasks. 
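A minimal PyTorch sketch of the generic weighted combination in (3) is given below, with the weighting function left pluggable so that the specific choices of αt discussed next can be swapped in; the helper names (combine_syllables, alpha_fn) are illustrative assumptions, not the authors' code.

import torch

def combine_syllables(syllable_vectors, alpha_fn):
    # syllable_vectors: (n_w, d_S) tensor for one word; alpha_fn maps it to (n_w,) weights
    alpha = alpha_fn(syllable_vectors)
    return (alpha.unsqueeze(1) * syllable_vectors).sum(dim=0)   # word vector x, as in eq. (3)

# Two of the simplest weightings described below:
syl_sum = lambda s: torch.ones(s.size(0))                        # Syl-Sum: alpha_t = 1
syl_avg = lambda s: torch.full((s.size(0),), 1.0 / s.size(0))    # Syl-Avg: alpha_t = 1 / n_w

The attention-style variants only differ in replacing alpha_fn with softmax-normalized learned scores.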
Syl-Sum: Summing up syllable vectors to get a word vector can be obtained by setting \u03b1t(st) = 1. 4In languages with alphabetic writing systems. \fThis approach was used by Botha and Blunsom (2014) to combine a word and its morpheme embeddings into a single word vector. Syl-Avg: A simple average of syllable vectors can be obtained by setting \u03b1t(st) = 1/nw. This can be also called a \u201ccontinuous bag of syllables\u201d in an analogy to a CBOW model (Mikolov et al., 2013), where vectors of neighboring words are averaged to get a word embedding of the current word. Syl-Avg-A: We let the weights \u03b1t in (3) be a function of parameters (a1, . . . , an) of the model, which are jointly trained together with other parameters. Here n = maxw{nw} is a maximum word length in syllables. In order to have a weighted average in (3) we apply a softmax normalization: \u03b1t = softmax(a)t = exp(at) Pn \u03c4=1 exp(a\u03c4) (4) Syl-Avg-B: We can let \u03b1t depend on syllables and their positions: \u03b1t = \u03b1t(st) = softmax(ast + b)t where A \u2208RdS\u00d7n (with elements as,t) is a set of parameters that determine the importance of each syllable type in each (relative) position, b \u2208Rn is a bias, which is conditioned only on the relative position. This approach is motivated by recent work on using an attention mechanism in the CBOW model (Ling et al., 2015a). We feed the resulting x from (3) into a stack of highway layers to allow interactions between the syllables. 3.4 Concatenation (Syl-Concat) In this model we simply concatenate syllable vectors (1) into a single word vector: x = [s1; s2; . . . ; snw; 0; 0; . . . ; 0 | {z } n\u2212nw ] We zero-pad x so that all word vectors have the same length n \u00b7 dS to allow batch processing, and then we feed x into a stack of highway layers. 4 Word-level language model Once we have word embeddings x1:k for a sequence of words w1:k we can use a word-level RNN language model to produce a sequence of states h1:k and then predict the next word according to the probability distribution Pr(wk+1|w1:k) = softmax(hkW + b), where W \u2208RdLM\u00d7|W|, b \u2208R|W|, and dLM is the hidden layer size of the RNN. Training the model involves minimizing the negative log-likelihood over the corpus w1:K: \u2212PK k=1 log Pr(wk|w1:k\u22121) \u2212 \u2192min (5) As was mentioned in Section 3.1 there is a huge variety of RNN architectures to choose from. The most advanced recurrent neural architectures, at the time of this writing, are recurrent highway networks (Zilly et al., 2017) and a novel model which was obtained through a neural architecture search with reinforcement learning (Zoph and Le, 2017). These models can be spiced up with the most recent regularization techniques for RNNs (Gal and Ghahramani, 2016) to reach state-of-theart. However, to make our results directly comparable to those of Kim et al. (2016) we select a two-layer LSTM and regularize it as in Zaremba et al. (2014). 5 Experimental Setup We search for the best model in two steps: \ufb01rst, we block the word-level LSTM\u2019s architecture and pre-select the three best models under a small parameter budget (5M), and then we tune these three best models\u2019 hyperparameters under a larger budget (20M). Pre-selection: We \ufb01x dLM (hidden layer size of the word-level LSTM) at 300 units per layer and run each syllable-aware word embedding method from Section 3 on the English PTB data set (Marcus et al., 1993), keeping the total parameter budget at 5M. 
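Since both the pre-selection and the later tuning are constrained by a fixed parameter budget, a small helper for counting trainable parameters is convenient; the sketch below assumes models are built in PyTorch and is not the authors' tooling.

import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of trainable parameters; tensors shared through weight tying are counted once.
    seen, total = set(), 0
    for p in model.parameters():
        if p.requires_grad and id(p) not in seen:
            seen.add(id(p))
            total += p.numel()
    return total

# e.g. assert count_parameters(model) <= 5_000_000   # the 5M pre-selection budget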
The architectural choices are speci\ufb01ed in Appendix A. Hyperparameter tuning: The hyperparameters of the three best-performing models from the preselection step are then thoroughly tuned on the same English PTB data through a random search according to the marginal distributions: \u2022 dS \u223cU(20, 650),5 \u2022 log(dHW) \u223cU(log(160), log(2000)), \u2022 log(dLM) \u223cU(log(300), log(2000)), with the restriction dS < dLM. The total parameter budget is kept at 20M to allow for easy comparison to the results of Kim et al. (2016). Then these three best models (with their hyperparameters tuned on PTB) are trained and evaluated on small(DATAS) and medium-sized (DATA-L) data sets in six languages. 5U(a, b) stands for a uniform distribution over (a, b). \fModel PPL Model PPL LSTM-Word 88.0 Char-CNN 92.3 Syl-LSTM 88.7 Syl-Avg 88.5 Syl-CNN-2 86.6 Syl-Avg-A 91.4 Syl-CNN-3 84.6 Syl-Avg-B 88.5 Syl-CNN-4 86.8 Syl-Concat 83.7 Syl-Sum 84.6 Table 1: Pre-selection results. PPL stands for test set perplexity, all models have \u22485M parameters. Model dS dHW dLM Size PPL Syl-CNN 242 1170 380 15M 80.5 Syl-Sum 438 1256 435 18M 80.3 Syl-Concat 228 781 439 13M 79.4 Table 2: Hyperparameters tuning. In Syl-CNN, dHW is a function of the primary hyperparameter c = 195 (see Appendix A). Optimizaton is performed in almost the same way as in the work of Zaremba et al. (2014). See Appendix B for details. Syllabi\ufb01cation: The true syllabi\ufb01cation of a word requires its grapheme-to-phoneme conversion and then splitting it into syllables based on some rules. Since these are not always available for lessresourced languages, we decided to utilize Liang\u2019s widely-used hyphenation algorithm (Liang, 1983). 6 Results The results of the pre-selection are reported in Table 1. All syllable-aware models comfortably outperform the Char-CNN when the budget is limited to 5M parameters. Surprisingly, a pure word-level model,6 LSTM-Word, also beats the character-aware one under such budget. The three best con\ufb01gurations are Syl-Concat, Syl-Sum, and Syl-CNN-3 (hereinafter referred to as Syl-CNN), and tuning their hyperparameters under 20M parameter budget gives the architectures in Table 2. The results of evaluating these three models on small (1M tokens) and medium-sized (17M\u2013 57M tokens) data sets against Char-CNN for different languages are provided in Table 3. The models demonstrate similar performance on small data, but Char-CNN scales signi\ufb01cantly better on medium-sized data. From the three syllable-aware models, Syl-Concat looks the most advantageous as it demonstrates stable results and has the least 6When words are directly embedded into RdW through an embedding matrix EW \u2208R|W|\u00d7dW . 7Syl-CNN results on DATA-L are not reported since computational resources were insuf\ufb01cient to run these con\ufb01gurations. Model EN FR ES DE CS RU Char-CNN 78.9 184 165 239 371 261 DATA-S Syl-CNN 80.5 191 172 239 374 269 Syl-Sum 80.3 193 170 243 389 273 Syl-Concat 79.4 188 168 244 383 265 Char-CNN 160 124 118 198 392 190 DATA-L Syl-CNN7 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 Syl-Sum 170 141 129 212 451 233 Syl-Concat 176 139 129 225 449 225 Table 3: Evaluation of the syllable-aware models against Char-CNN. In each case the smallest model, Syl-Concat, has 18%\u201333% less parameters than Char-CNN and is trained 1.2\u20132.2 times faster (Appendix C). number of parameters. Therefore in what follows we will make a more detailed comparison of SylConcat with Char-CNN. 
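This breakdown can be reproduced from per-token log-probabilities by bucketing test tokens on their training-set frequency. A NumPy sketch follows; `log_probs` (natural-log probabilities of each test token under one model) and `train_counts` (a word-to-count dict from the training data) are assumed to be available already.

import numpy as np

def ppl_by_frequency(test_tokens, log_probs, train_counts, n_bins=6):
    # Perplexity of test tokens grouped by (log10) training-set frequency of the token type.
    freqs = np.array([train_counts.get(w, 1) for w in test_tokens], dtype=float)
    log_probs = np.asarray(log_probs, dtype=float)
    edges = np.linspace(0.0, np.log10(freqs.max()), n_bins)
    bins = np.digitize(np.log10(freqs), edges)
    return {int(b): float(np.exp(-log_probs[bins == b].mean())) for b in np.unique(bins)}

# Comparing ppl_by_frequency(...) for Char-CNN and Syl-Concat gives a Figure-3-style view.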
Shared errors: It is interesting to see whether Char-CNN and Syl-Concat are making similar errors. We say that a model gives an error if it assigns a probability less than p\u2217to a correct word from the test set. Figure 2 shows the percentage of errors which are shared by Syl-Concat and CharCNN depending on the value of p\u2217. We see that the 0.0 0.1 0.2 0.3 0.4 0.5 threshold p* 84 86 88 90 92 94 96 98 100 % of common errors EN FR ES DE CS RU 0.0 0.1 0.2 0.3 0.4 0.5 threshold p* 88 90 92 94 96 98 % of common errors EN FR ES DE CS RU Figure 2: Percentage of errors shared by both Syl-Concat and Char-CNN on DATA-S (left) and DATA-L (right). vast majority of errors are shared by both models even when p\u2217is small (0.01). PPL breakdown by token frequency: To \ufb01nd out how Char-CNN outperforms Syl-Concat, we partition the test sets on token frequency, as computed on the training data. We can observe in Figure 3 that, on average, the more frequent the word is, the bigger the advantage of Char-CNN over Syl-Concat. The more Char-CNN sees a word in different contexts, the more it can learn about this word (due to its powerful CNN \ufb01lters). SylConcat, on the other hand, has limitations \u2013 it cannot see below syllables, which prevents it from extracting the same amount of knowledge about the word. \f1 2 3 4 5 6 log-frequency \u221218 \u221216 \u221214 \u221212 \u221210 \u22128 \u22126 \u22124 percentage EN FR ES DE Figure 3: PPL reduction by token frequency, CharCNN relative to Syl-Concat on DATA-L. Model 80% 90% 95% 99% Char-CNN 568 762 893 1038 Syl-Concat 515 729 875 1035 Table 4: Number of principle components when PCA is applied to word embeddings produced by each model, depending on % of variance to retain. PCA of word embeddings: The intrinsic advantage of Char-CNN over Syl-Concat is also supported by the following experiment: We took word embeddings produced by both models on the English PTB, and applied PCA to them.8 Regardless of the threshold percentage of variance to retain, the embeddings from Char-CNN always have more principal components than the embeddings from Syl-Concat (see Table 4). This means that Char-CNN embeds words into higher dimensional space than Syl-Concat, and thus can better distinguish them in different contexts. LSTM limitations: During the hyperparameters tuning we noticed that increasing dS, dHW and dLM from the optimal values (in Table 2) did not result in better performance for Syl-Concat. Could it be due to the limitations of the word-level LSTM (the topmost layer in Fig. 1)? To \ufb01nd out whether this was the case we replaced the LSTM by a Variational RHN (Zilly et al., 2017), and that resulted in a signi\ufb01cant reduction of perplexities on PTB for both Char-CNN and Syl-Concat (Table 5). Moreover, increasing dLM from 439 to 650 did result in better performance for Syl-Concat. Optimization details are given in Appendix B. Comparing syllable and morpheme embeddings: It is interesting to compare morphemes and syllables. We trained Morfessor 2.0 (Creutz and 8We equalized highway layer sizes dHW in both models to have same dimensions for embeddings. In both cases, word vectors were standardized using the z-score transformation. Model depth dLM Size PPL RHN-Char-CNN 8 650 20M 67.6 RHN-Syl-Concat 8 439 13M 72.0 RHN-Syl-Concat 8 650 20M 69.4 Table 5: Replacing LSTM with Variational RHN. Lagus, 2007) in its default con\ufb01guration on the PTB training data and used it instead of the syllabi\ufb01er in our models. 
Interestingly, we got \u22483K unique morphemes, whereas the number of unique syllables was \u22486K. We then trained all our models on PTB under 5M parameter budget, keeping the state size of the word-level LSTM at 300 (as in our pre-selection step for syllable-aware models). The reduction in number of subword types allowed us to give them higher dimensionality dM = 100 (cf. dS = 50).9 Convolutional (Morph-CNN-3) and additive (Morph-Sum) models performed better than others with test set PPLs 83.0 and 83.9 respectively. Due to limited amount of time, we did not perform a thorough hyperparameter search under 20M budget. Instead, we ran two con\ufb01gurations for MorphCNN-3 and two con\ufb01gurations for Morph-Sum with hyperparameters close to those, which were optimal for Syl-CNN-3 and Syl-Sum correspondingly. All told, our best morpheme-aware model is Morph-Sum with dM = 550, dHW = 1100, dLM = 550, and test set PPL 79.5, which is practically the same as the result of our best syllable-aware model Syl-Concat (79.4). This makes Morph-Sum a notable alternative to CharCNN and Syl-Concat, and we defer its thorough study to future work. Source code: The source code for the models discussed in this paper is available at https://github.com/zh3nis/lstm-syl. 7" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file