[ { "url": "http://arxiv.org/abs/2404.17983v1", "title": "TI-ASU: Toward Robust Automatic Speech Understanding through Text-to-speech Imputation Against Missing Speech Modality", "abstract": "Automatic Speech Understanding (ASU) aims at human-like speech\ninterpretation, providing nuanced intent, emotion, sentiment, and content\nunderstanding from speech and language (text) content conveyed in speech.\nTypically, training a robust ASU model relies heavily on acquiring large-scale,\nhigh-quality speech and associated transcriptions. However, it is often\nchallenging to collect or use speech data for training ASU due to concerns such\nas privacy. To approach this setting of enabling ASU when speech (audio)\nmodality is missing, we propose TI-ASU, using a pre-trained text-to-speech\nmodel to impute the missing speech. We report extensive experiments evaluating\nTI-ASU on various missing scales, both multi- and single-modality settings, and\nthe use of LLMs. Our findings show that TI-ASU yields substantial benefits to\nimprove ASU in scenarios where even up to 95% of training speech is missing.\nMoreover, we show that TI-ASU is adaptive to dropout training, improving model\nrobustness in addressing missing speech during inference.", "authors": "Tiantian Feng, Xuan Shi, Rahul Gupta, Shrikanth S. Narayanan", "published": "2024-04-27", "updated": "2024-04-27", "primary_cat": "cs.SD", "cats": [ "cs.SD", "cs.CL", "eess.AS" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Text-To-Speech (TTS): TTS aims to generate human-like speech from a given text input. Modern TTS models typically include two modules: 1) an acoustic module that maps intermediate acoustic features from the input text (Tacotrons [34, 27], FastSpeechs [26, 25], and et al.); and, 2) a waveform module that transforms the acoustic feature to the audio (WavNet [21], NSF [33], HiFi-GAN [13], DiffWave [14], and et al.) 
With the rise of foundation models in language and speech, SpeechT5 [1] and VALL-E(X) [32, 39] learn aligned speech and language representations with self-supervised learning to synthesize speech. Missing Modality: How can a data-driven model be leveraged with fragmented input? A straightforward method is to remove the incomplete samples [6, 20]. Instead of removal, another direction is to reconstruct the missing modality from a joint representation space [23, 40] or through Bayesian meta-learning [19]. With the prevalence of transformers in multimodal learning, multiple modifications have been proposed to extend the transformer to missing-modality input, such as feature reconstruction [36], tag-based encoding [37], multi-task modeling [18], and prompt learning [15]. However, existing literature studies this issue from the glass-half-empty perspective, while TI-ASU reframes this problem from the glass-half-full perspective with the assistance of generative AI to boost low-resource speech training.", "pre_questions": [], "main_content": "Introduction Speech understanding is fundamental to human communication, as the rich information conveyed through speech allows efficient and explicit ways to exchange thoughts, emotions, and ideas. Within the development of conversational AI, automatic speech understanding (ASU) aims to accurately and comprehensively interpret input speech using advanced ML techniques, including bridging potential communication gaps between people from diverse backgrounds and language capabilities. With the advances in mobile computing, one popular application of ASU has been virtual assistants, such as Amazon Alexa and Apple Siri. In the deep learning era, the success of data-driven ASU relies heavily on the quality and diversity of the data used for training the models.
To ensure that the ASU model reaches fair convergence, an ideal corpus is expected to contain an adequate amount of complete audio samples that cover diverse speech attributes, such as emotion, intonation, and speaker demographics [9]. However, multiple practical constraints\u2013both technological and human\u2013hinder the collection of ASU datasets, such as hardware instability, imbalances in data resources, and the need for privacy protection. To deal with limited access to speech samples, it is hence necessary to make ASU robust to \u2018missing\u2019 speech. In this work, we focus on investigating ASU when speech is increasingly missing from the corpus and, in some extreme cases, only the label of the speech is provided during training. Previously, [40, 17] approached this problem by projecting multi-modal data to a unified feature space from which the missing modality can be reconstructed during inference. Inspired by the recent surge of generative AI, some attempts [12, 11, 38] incorporate synthetic data in training to boost model performance in the zero-shot setting. Moreover, [31, 8, 2] have demonstrated that synthetic speech can serve as data augmentation to boost speech recognition.
Figure 1: Problem formulation of missing modalities in this work with ASU. The missing speech modality includes cases in training data alone or any data (both training data and testing data).
In this work, we propose TI-ASU, a Text-to-speech Imputation approach for Automatic Speech Understanding that addresses missing-modality challenges in ASU applications. The core idea of TI-ASU is to impute the missing speech modality with TTS models from text transcriptions. Extensive experimental results demonstrate the effectiveness of TI-ASU in various missing scales and single- and multi-modality scenarios. The present work focuses on studying multi-modal learning for ASU with missing speech modality, as shown in Figure 1^1. Our motivation to study scenarios with missing speech arises from the fact that speech data carry sensitive information about an individual, such as biometric fingerprints and demographic and health status. In such instances, a common assumption is that the text information may remain accessible by deploying efficient ASR models on the edge, as illustrated in Figure 2. However, the speech data are not allowed to egress from the user devices due to privacy risks, preventing them from being used for training ASU models. Overall, we study two distinct scenarios of missing speech modality: speech missing in training data only, or in any data, both training and testing. 3.1 Speech-Missing in Training Data Our investigation starts with a relatively trivial case where missing speech only occurs in the training data while testing data contain complete speech-text pairs.
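As a toy illustration of this setting (hypothetical data and helper names, not the authors' code), a training corpus with speech missing ratio p can be split into a complete subset and a text-only subset:

```python
# Simulate a training set where a fraction p of samples lose their speech
# modality, leaving text-only samples (toy data; illustrative only).
import random

def split_by_missing_ratio(samples, p, seed=0):
    # samples: list of (text, speech, label) triples
    rng = random.Random(seed)
    n_missing = round(p * len(samples))
    missing_ids = set(rng.sample(range(len(samples)), n_missing))
    d_complete = [s for i, s in enumerate(samples) if i not in missing_ids]
    d_text_only = [(t, y) for i, (t, s, y) in enumerate(samples) if i in missing_ids]
    return d_complete, d_text_only

corpus = [(f'utterance {i}', f'audio {i}', i % 4) for i in range(100)]
d_c, d_t = split_by_missing_ratio(corpus, p=0.95)
# With p = 0.95, only 5 of 100 samples keep their speech-text pair.
```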
Following the notation conventions for missing modalities in [15], we define the sets of complete training samples and text-only training samples as D_train^C and D_train^T, respectively. Specifically, we represent D_train^C = {x_i^T, x_i^S, y_i} and
^1 The figure in this paper uses images from https://openmoji.org/
Figure 2: Illustration of edge devices performing ASR services, where text modality is always present.
Figure 3: Learning framework of TI-ASU: Imputing missing speech modality with synthetic speech content through text-to-speech transformer models for robust automatic speech understanding.
D_train^T = {x_i^T, y_i}, where i \u2208 N, and T and S denote the text and speech modalities, respectively. The full training dataset can then be expressed as D_train = {D_train^C, D_train^T}. Without loss of generality, we introduce the speech modality missing ratio in the training data, denoted as p. Namely, p = 0% indicates the special case in which no speech samples are missing. 3.2 Speech-Missing in Any Data Apart from exploring the scenario with speech missing in training data, we further study cases where speech data can be missing in both training and testing data. Previous studies on multimodal learning have highlighted that multimodal models, such as multimodal transformers, are frequently sensitive to missing modalities during inference, resulting in substantial performance drops when test samples contain missing modalities. On the other hand, missing modality can be regarded as a robustness challenge in multimodal learning, as a missing modality is a unique case of data perturbation. In summary, we denote the test dataset as D_test = {D_test^C, D_test^T}, following the notation established in the previous subsection. We define the speech modality missing ratio in testing data as q, with q = 100% representing the scenario where only the text modality is present in the testing set. 4 TI-ASU Framework 4.1 Pre-trained Models This work experiments with widely adopted pre-trained models as the backbone for modeling ASU, specifically choosing WavLM [7] and RoBERTa [16] as the speech and text encoders, respectively. WavLM, a model trained with self-supervised learning, demonstrates promising performance in a wide range of speech-centered tasks. This model is trained with multiple learning objectives, such as frame prediction, speech enhancement, and speech prediction. Our experiments employ the WavLM Base Plus model, consisting of 12 encoding layers with approximately 90M parameters. On the other hand,
Table 1: Summary of dataset statistics used in this work.
Dataset        Speakers  Classes  Utterances
IEMOCAP        10        4        5,531
MSP-Improv     12        4        7,798
SLURP          177       46       72,277
SLUE           140       3        7,231
RoBERTa is a transformer language model built on BERT, benefiting from a much larger training corpus and modified training objectives compared to BERT. 4.2 End-to-End Downstream Modeling Our modeling draws inspiration from [10], highlighting the effectiveness of combining speech and text information for ASU. Speech: Similar to [22], our speech downstream model utilizes learnable weight parameters to combine the hidden outputs from all encoder layers. The combined speech output is then fed into two 1D pointwise convolutional layers with a filter size of 256. Finally, global average pooling is applied to the convolutional layer output, resulting in an output vector of size 256. Text: In contrast to speech modeling, we apply a 2-layer bidirectional GRU with a hidden size of 256 to the last hidden outputs of the RoBERTa model. The outputs from the GRU layers are averaged to obtain an output embedding of size 256. Multimodal Fusion: In the last stage, we concatenate the text embedding and speech embedding into a multimodal embedding, which is subsequently fed into two fully connected layers for the ASU classification tasks. We note that the speech-only and text-only models feed the speech and text embeddings, respectively, directly into the classifiers without fusion. 4.3 Speech Data Generation The core idea behind TI-ASU is to impute the missing speech modality (as shown in Figure 3) with transformer-based TTS models. To do so, we prompt the TTS models with transcriptions to synthesize speech data. Prior research on visual recognition [28, 12] has demonstrated that diversity is needed in training data generation. For example, researchers propose to include multi-domain knowledge to enrich the diversity of image generation by augmenting the domain \"photo\" with other domains such as drawing and painting.
In contrast to the diversity enhancement approaches in image generation, our proposed TI-ASU enriches generation diversity by employing multiple TTS models. This differs from image generation, where diversity is associated with the prompt message; in speech generation, the input transcription to the TTS model remains the same. Here, we deploy three TTS models: Speech-T5 [1], MMS-En [24], and Vall-E X [39]. 4.4 Training with Speech Imputation The multi-modal learning framework of TI-ASU is presented in Figure 3. Without loss of generality, we define the generated speech dataset as D_train^{S'} = {x_j^{S'}, y_j}, where x_j^{S'} is the speech generated from the text x_j^T and S' denotes the generated speech modality. More concretely, given a TTS model G, we define x_j^{S'} = G(x_j^T). During each training epoch, we impute each text-only sample by randomly selecting a generated speech sample from the generation set produced by the three TTS models. Consequently, we obtain a modality-complete dataset \u02c6D = {D^C, \u02c6D^T} with speech data imputation, where \u02c6D^T = {x_i^T, x_i^{S'}, y_i}. Finally, we perform multi-modal training with the imputed dataset \u02c6D. It is worth noting that TI-ASU can integrate with other multi-modal learning algorithms, such as dropout training and prompt learning, as shown in Figure 3. 5 Datasets We utilize four datasets \u2013 IEMOCAP [4], MSP-Improv [5], SLURP [3], and SLUE-Voxceleb [29] \u2013 to evaluate TI-ASU across three ASU-related tasks: speech emotion recognition, spoken language understanding, and speech sentiment analysis. Notably, the first two datasets are employed for the same task. 6 Experimental Details 6.1 Speech Data Generation As mentioned previously, we apply three TTS models to generate speech samples: Speech-T5, MMS-En, and Vall-E X. We generate three speech samples for each text-only sample using these TTS models.
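The per-epoch imputation step described above can be sketched as follows (hypothetical helper names; stand-in strings play the role of the synthetic audio from the three TTS models):

```python
# For each text-only sample, randomly pick one of the pre-generated TTS
# candidates each epoch, yielding a modality-complete set (illustrative).
import random

def impute_epoch(text_only_set, tts_candidates, rng):
    # text_only_set: list of (text, label); tts_candidates: text -> list of
    # synthetic speech samples produced by the different TTS models.
    imputed = []
    for text, label in text_only_set:
        speech = rng.choice(tts_candidates[text])
        imputed.append((text, speech, label))
    return imputed

rng = random.Random(0)
text_only = [('turn on the light', 'intent_a'), ('play some jazz', 'intent_b')]
candidates = {t: [f'{name}:{t}' for name in ('speecht5', 'mms_en', 'valle_x')]
              for t, _ in text_only}
epoch_data = impute_epoch(text_only, candidates, rng)
```

Re-drawing the candidate each epoch is what exposes the model to all three TTS voices over the course of training rather than fixing one synthetic rendering per utterance.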
In particular, in the context of emotion recognition, we generate speech samples by introducing emotion styles associated with the training data when utilizing the Vall-E X model. We emphasize that Vall-E X derives its emotion styles from datasets distinct from the IEMOCAP and MSP-Improv datasets, avoiding the introduction of in-domain knowledge into the generation process. Most importantly, we choose not to impute missing data in the test set to prevent introducing artifacts into the evaluation. 6.2 Evaluation and Hyperparameters Evaluation: We use unweighted average recall (UAR) to evaluate emotion recognition, and the F1 score to evaluate intent and sentiment classification. We conducted the experiments with 5-fold and 6-fold cross-validation on the IEMOCAP and MSP-Improv datasets, respectively. Moreover, we performed training with 3 random seeds and report the average performance on the remaining datasets. For the SLURP dataset, we use the standard splits for training, validation, and testing, while for the SLUE-Voxceleb dataset, we split 20% of the default training data for validation. We report performance on the default validation set, given that the labels of the test set are unavailable in the public SLUE-Voxceleb data. Hyperparameters: In all experiments, including fine-tuning on natural and imputed datasets, we set the batch size to 64. Specifically, we set the learning rate to 0.0005 and the maximum number of training epochs to 30 for training all speech models. We apply a learning rate of 0.0001 and a maximum of 20 training epochs for text and multimodal training, as we observed faster convergence with the text modality. The experiments are conducted on a high-performance computing server with A40 GPUs. We use the checkpoints of each pre-trained model from HuggingFace [35]. 7 Can TI-ASU Improve Data Efficiency with Speech-Missing in Training? In this section, we present the results of ASU with speech missing in training.
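For reference, the unweighted average recall used in the evaluation above is simply the mean of per-class recalls, so every class counts equally regardless of its frequency; a small self-contained sketch (not the authors' evaluation code):

```python
# Unweighted average recall (UAR): average the recall of each class,
# ignoring class frequency, which suits imbalanced emotion labels.
from collections import defaultdict

def uar(y_true, y_pred):
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

score = uar([0, 0, 0, 1], [0, 0, 1, 1])
# class 0 recall = 2/3, class 1 recall = 1/1 -> UAR = 5/6
```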
Specifically, we implement the following approaches for comparison:
\u2022 Text Training: Given that a substantial portion of speech is missing in our experiments, one natural baseline is to rely solely on text for ASU. This baseline is essentially a text-only model.
\u2022 Speech Training: In addition to the text-only model, another baseline is to train a speech model with the available speech data. Given the missing ratio p, we train the model with (100 \u2212 p)% of the speech.
\u2022 Multimodal Training: An extension to speech training with limited speech is adding the paired text. Specifically, given speech missing ratio p, we train with (100 \u2212 p)% of the speech-text data.
\u2022 Multi-modal Training with Zero-filling Imputation: The approaches above use only a portion of the available data. To utilize all of it, we impute the missing speech by filling it with zeros, as in [15]. For example, we fill p% of the speech data with zeros in the multi-modal training.
\u2022 TI-ASU-S: Speech-only training based on TI-ASU.
\u2022 TI-ASU-MM: Multimodal training based on TI-ASU.
Table 2: Comparisons among text, speech, and multi-modal models across different datasets. We compare models at p = 0%, where p indicates the speech missing ratio in training.
Dataset        Text-Only  Speech-Only  Multi-modal
IEMOCAP        62.8       69.2         71.4
MSP-Improv     52.7       63.7         65.6
Slurp          78.8       59.2         80.3
SLUE-Voxceleb  51.4       45.1         50.0
Table 3: Comparisons of speech training and TI-ASU-S. Here, TI-ASU-S is trained with purely synthetic data (p = 100%), and speech training uses complete real speech (p = 0%). p indicates the speech missing ratio in training data.
Dataset        Speech Training (p = 0%)  TI-ASU-S (p = 100%)
IEMOCAP        69.2                      52.7
MSP-Improv     63.7                      44.4
Slurp          59.2                      55.5
SLUE-Voxceleb  45.1                      41.2
7.1 Would text alone be enough for ASU? Text modality alone is effective in diverse language-related tasks. Therefore, our investigation begins with whether text data alone achieves competitive ASU performance.
To answer this, we compare the text model with speech and multi-modal models trained with complete data, as shown in Table 2. The results demonstrate that multimodal models consistently yield benefits in emotion recognition, while text data exhibits competitive or even better performance than multimodal models in the other tasks. However, we find that speech models often underperform text and multimodal models. In summary, our experiments highlight, not surprisingly, that speech information can benefit emotion recognition (leveraging the acoustic variation in emotion expression) but does not necessarily contribute to sentiment and intent classification (which are largely driven by language information). 7.2 Can TI-ASU-S Provide Competitive Zero-shot Performance Without Real Speech? Zero-shot Performance with Synthetic Speech: While speech data may not consistently benefit multimodal ASU, as shown in Table 2, it is worth investigating whether relying solely on synthetic speech can yield a well-performing speech model. Accordingly, this corresponds to a speech missing ratio of p = 100% in training. Table 3 compares speech training without missing data and TI-ASU-S with p = 100%. The results reveal that real speech exhibits substantial advantages in emotion recognition, while synthetic speech alone can yield competitive speech models for intent and sentiment classification. Even though synthetic speech data cannot replace real speech data, our findings suggest their potential as a valuable training resource. Multiple TTS in Speech Generation: We further explore the effectiveness of our proposed method in enhancing generation diversity through multiple TTS models for ASU training. We compare TI-ASU-S using individual TTS generation versus the combined approach described above.
The comparisons in Figure 4 suggest that augmenting speech generation by deploying multiple TTS models brings substantial benefits to ASU training, leading to consistent performance increases compared to single-TTS generation. Therefore, the remaining TI-ASU results are based on multiple-TTS generation. 7.3 Can TI-ASU-S Improve Model Performance with Limited Real Speech? Table 3 shows the importance of real speech data in training ASU, leading us to study training ASU models with limited available speech data. p indicates the speech missing ratio in training. TI-ASU-S with p = 95% (extreme missing condition): To begin, we study severe speech-missing conditions, including cases where the speech missing ratio is p = 95%. Table 4 compares the performance of speech training and TI-ASU-S at p = 95%. The results indicate that our proposed TI-ASU-S consistently outperforms real-speech training when 95% of the speech is missing.
Figure 4: Comparisons between single-TTS generation and multiple-TTS generation in TI-ASU (F1 on SLUE-Voxceleb and Slurp; UAR on MSP-Improv and IEMOCAP). Here, the training set is entirely based on synthetic speech data.
Table 4: Comparisons of speech models trained using real speech and TI-ASU-S. Here, we present real-speech training and TI-ASU-S with p = 50% and p = 95%. p indicates the speech missing ratio in training data.
Dataset        Speech (p=95%)  TI-ASU-S (p=95%)  Speech (p=50%)  TI-ASU-S (p=50%)
IEMOCAP        52.5            54.8              65.7            65.8
MSP-Improv     44.4            48.3              61.5            60.5
Slurp          < 5%            56.0              50.4            58.0
SLUE-Voxceleb  34.4            41.2              40.4            45.7
We identify that this performance increase is substantial in intent classification, where speech training fails to converge.
TI-ASU-S with p = 50% (moderate missing condition): In addition to cases where a significant portion of the speech data is missing, we extend our investigation to less severe scenarios where only half of the speech is unavailable. The results in Table 4 demonstrate that even with more real speech, TI-ASU-S can yield better performance than real-speech training on most datasets. Similar to the results at p = 95%, we identify that TI-ASU-S provides notable performance improvements in intent classification. However, the performance benefit is smaller at p = 50% than at p = 95%. 7.4 Can TI-ASU-MM Provide Competitive Multimodal Models with No Real Speech? Apart from assessing TI-ASU in assisting speech training for ASU, we further investigate multimodal learning, incorporating the text modality into ASU training. Similar to the TI-ASU-S experiments, we begin with TI-ASU-MM where the speech missing ratio p = 100%, indicating that no real speech is available in training. Table 5 compares TI-ASU-MM, which pairs purely synthetic speech with text, against text-only and multimodal training; the latter two are trained with complete data. The results indicate that TI-ASU-MM can improve emotion recognition compared to text-only training, leading to a 5-6% performance increase. However, TI-ASU-MM underperforms multimodal learning in emotion recognition. On the other hand, we find that TI-ASU-MM does not improve intent or sentiment classification, where the text-only model provides the best performance. 7.5 Can TI-ASU-MM Improve Multimodal Performance with Limited Real Speech? TI-ASU-MM with p = 95% (extreme missing condition): Similar to the TI-ASU experiments where we compared TI-ASU with speech training on limited speech data, we compare TI-ASU-MM with multimodal training on data where both modalities are present. In addition, we compare TI-ASU-MM with multimodal training using zero-filling imputation on missing speech samples, as shown in Table 6.
Similar to findings from speech training, we observe that TI-ASU-MM benefits emotion recognition in multimodal learning the most. This advantage is also exhibited in intent classification, but not in sentiment classification. Overall, the results support that TI-ASU enhances ASU performance in multimodal learning by imputing missing speech with TTS models.
Table 5: Comparing TI-ASU-MM with models trained using complete text and multimodal data. Here, TI-ASU-MM is with p = 100%, meaning no real speech is used in training, only synthetic speech.
Dataset      Text  Multimodal  TI-ASU-MM (p = 100%)
IEMOCAP      62.8  71.4        67.1
MSP-Improv   52.7  65.6        58.0
Slurp        78.8  80.3        79.4
SLUE         51.4  50.0        51.0
Table 6: Comparing TI-ASU-MM with multimodal learning using available speech-text pairs and zero-filling imputation on missing speech. Here, we experiment with the condition where p = 95%. p indicates the speech missing ratio in training data.
Dataset      MM    MM Zero-Filling Imputation  TI-ASU-MM
IEMOCAP      52.8  63.9                        67.1
MSP-Improv   32.8  53.2                        57.9
Slurp        45.9  78.7                        80.4
SLUE         42.5  51.4                        50.6
TI-ASU-MM with Lower p: Moreover, we assess TI-ASU-MM where limited real speech is accessible. Given our findings of marginal or no benefits of TI-ASU for intent and sentiment classification, we focus this analysis on emotion recognition. We plot the comparisons between TI-ASU-MM and multimodal learning with zero-filling imputation on IEMOCAP and MSP-Improv, as shown in Figure 5. The plot highlights that TI-ASU-MM is more effective in enhancing model performance when the speech missing ratio is large (e.g., p > 70%), while multimodal learning with zero-filling imputation delivers comparable performance to TI-ASU-MM when 50% of the speech is available. 8 Can TI-ASU Enhance Robustness with Speech-Missing in Train and Test? This section extends our previous experiments from speech missing in training data to speech missing in any data.
Dropout training [18] has been widely applied to enhance the robustness of multimodal models against missing modalities during inference. In dropout training, a random portion of the selected modality is replaced with zeros in each batch. In this paper, we perform dropout on the speech modality. Specifically, we propose TI-ASU Dropout, combining dropout training with TI-ASU, where we randomly fill data in the imputed dataset \u02c6D = {D^C, \u02c6D^T} with zeros. Moreover, our baseline is MM-Dropout, multimodal dropout training with complete data, which is the baseline in [15, 18]. The dropout training uses a dropout rate equal to the test speech missing ratio q. We evaluate the model with missing ratios q \u2208 {50%, 70%, 90%}. Can TI-ASU Dropout outperform MM-Dropout when p = 95% (extreme missing condition)? Here, we train TI-ASU with p = 95%, meaning that only 5% of real speech is available. The comparisons between MM-Dropout and TI-ASU Dropout are listed in Table 7. The results show that TI-ASU with dropout training is more robust than multi-modal dropout training in intent and sentiment classification. However, we observe that this difference is marginal. In contrast, MM-Dropout outperforms TI-ASU Dropout in emotion recognition when q is low. Can TI-ASU Dropout outperform MM-Dropout when p = 50% (moderate missing condition)? We further investigate TI-ASU Dropout with more real data in training. The comparison in Table 7 shows that TI-ASU Dropout at p = 50% yields competitive emotion recognition performance compared to MM-Dropout at different q. In addition, we find marginal differences between TI-ASU Dropout and multimodal dropout in the remaining tasks. Overall, these results suggest that, at a lower p, TI-ASU Dropout enhances model robustness against missing test speech.
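The speech-modality dropout described above can be sketched as follows (a toy representation where a zeroed feature vector stands in for a dropped speech input; hypothetical helper, not the authors' code):

```python
# Modality dropout: with probability q, replace a sample's speech features
# with zeros so the model learns to cope with missing speech (illustrative).
import random

def drop_speech(batch, q, rng):
    # batch: list of (text, speech_features, label)
    out = []
    for text, speech, label in batch:
        if rng.random() < q:
            speech = [0.0] * len(speech)
        out.append((text, speech, label))
    return out

rng = random.Random(0)
batch = [('hi there', [0.3, -1.2, 0.8], 1), ('good morning', [1.1, 0.2, -0.5], 0)]
all_dropped = drop_speech(batch, q=1.0, rng=rng)   # every speech input zeroed
none_dropped = drop_speech(batch, q=0.0, rng=rng)  # batch left unchanged
```

Setting the dropout rate q to match the expected test-time missing ratio, as done above, aligns the training-time perturbation with the inference condition.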
Figure 5: Comparisons of multimodal training between TI-ASU-MM and zero-filling imputation on missing speech (UAR on IEMOCAP and MSP-Improv). Speech missing ratio in training data p \u2208 {50%, 70%, 90%, 95%}.
Table 7: Comparing MM-Dropout (Multimodal Dropout) with TI-ASU Dropout. p and q are the training and testing speech missing ratios, respectively.
Dataset  Dropout  p   q=50%  q=70%  q=90%
IEMOCAP  MM       0   66.6   64.5   63.8
         TI-ASU   50  66.4   64.6   64.1
         TI-ASU   95  63.9   63.2   63.3
Slurp    MM       0   79.0   78.3   78.6
         TI-ASU   50  78.4   78.1   78.2
         TI-ASU   95  79.4   78.8   79.0
SLUE     MM       0   51.0   49.3   50.4
         TI-ASU   50  51.4   50.6   51.3
         TI-ASU   95  51.4   51.1   51.1
9 Can LLM-Assisted Speech Generation Apply to TI-ASU? 9.1 Generation Approach In the previous section, we investigated speech imputation using speech generated from transcriptions. We argue that the diversity of this generation process is limited due to the constrained language content of the original speech. To augment the existing speech generation, we propose to augment the transcriptions using the recently released LLM, LLaMa2-70B [30]. In particular, we adopt the following prompt to augment the text transcriptions: Don\u2019t repeat my instructions. Rephrase the following sentence: TRANSCRIPTIONS Specifically, for the text-only training set D_train^T = {x_i^T, y_i}, LLM F, and prompt function prompt(\u00b7), we create the augmented text set D_train^{T_aug} = {x_i^{T_aug}, y_i}, where x_i^{T_aug} = F(prompt(x_i^T)). Moreover, we apply a similar approach to generate the speech set \u02c6D^{T_aug} = {x_i^{T_aug}, x_i^{S'}, y_i}. Finally, we combine \u02c6D^{T_aug} and \u02c6D^T to form the imputed multimodal set \u02c6D^{T'} with speech missing. During training, we randomly select a pair of text and speech from \u02c6D^{T'} as the data for training ASU.
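The prompt construction and text augmentation step above can be sketched as follows (the LLM call is a stand-in function; in the paper this role is played by LLaMa2-70B):

```python
# Build the rephrasing prompt and create the augmented text set. The LLM is
# replaced by a trivial stand-in; a real setup would query the model instead.
def build_prompt(transcription):
    return ('Don\u2019t repeat my instructions. '
            'Rephrase the following sentence: ' + transcription)

def fake_llm(prompt):
    # Stand-in for F(prompt(x)): echo the sentence back, capitalized.
    return prompt.rsplit(': ', 1)[1].capitalize()

def augment_text_set(text_only_set, llm):
    # text_only_set: list of (text, label); labels carry over unchanged.
    return [(llm(build_prompt(text)), label) for text, label in text_only_set]

augmented = augment_text_set(
    [('i did not want you to send that text yet', 'intent')], fake_llm)
```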
In this way, we enrich generation diversity both by employing multiple TTS models and by enhancing the language content with an LLM. 9.2 Generation Examples Here, we provide generation examples from the IEMOCAP, Slurp, and SLUE-Voxceleb datasets.
IEMOCAP
Raw: Out there somewhere there is this huge mass of silverfish headed this way
Augmented: A large school of shimmering silver fish is making its way towards us.
Slurp
Raw: i did not want you to send that text yet wait until i say send
Augmented: I wanted you to hold off on sending the text message until I gave you the green light, rather than sending it right away.
SLUE-Voxceleb
Raw: well it\u2019s hard to get the the right the proper finance money to
Augmented: Securing adequate financial resources can be a challenge.
Table 8: Comparisons of TI-ASU using the original transcript and LLM-assisted transcript augmentation. Here, we present both TI-ASU-S and TI-ASU-MM with p = 95%. p indicates the speech missing ratio in training data.
Dataset  Speech  TI-ASU-S  TI-ASU-S (LLM)  MM Zero-Filling Imputation  TI-ASU-MM  TI-ASU-MM (LLM)
IEMOCAP  52.5    54.8      54.5            63.9                        67.1       65.4
Slurp    < 5%    56.0      50.4            78.7                        80.4       76.2
SLUE     34.4    41.2      41.7            51.4                        50.6       53.0
9.3 Performance of LLM-assisted TI-ASU We compare the LLM-assisted TI-ASU with TI-ASU relying on the original transcript. We conduct the experiments using speech and multimodal training with p = 95%. In particular, we perform the LLM augmentation on the IEMOCAP, Slurp, and SLUE-Voxceleb datasets, as demonstrated in Table 8. The results indicate that LLM augmentation does not always improve fine-tuning performance, yielding decreased performance compared to TI-ASU on the IEMOCAP and Slurp datasets. We notice that this performance drop exists in both speech and multimodal training. However, we observe that the LLM-assisted augmentation can benefit sentiment classification, improving performance on the SLUE dataset in both speech and multimodal training.
Overall, the findings suggest that LLMs hold promise for augmenting speech samples. However, there is a need to improve the quality of the rephrased transcriptions used for speech generation, for example through more effective prompting strategies or more advanced models such as ChatGPT.

10 Conclusion

In this work, we proposed TI-ASU, a TTS imputation approach for multimodal ASU learning that addresses the challenges caused by a missing speech modality. Our experiments demonstrate that TI-ASU provides robust multimodal solutions against severe missing-speech settings in training or testing data. Crucially, increasing diversity through speech generation with multiple TTS models enhances TI-ASU performance.

Limitations and Future Works: While TI-ASU is effective in imputing speech data from raw transcripts for ASU training, it encounters challenges when combined with LLM-assisted augmentations, implying potential issues in the text generation. For example, we find that the LLM frequently elaborates the spoken language content, producing phrasing unlikely to occur in everyday communication. Moreover, enriching speaker diversity in speech generation remains a challenge. In future work, we plan to explore more advanced LLMs to improve the quality of the text augmentation, and to include human inspection to evaluate generation quality.
In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", "authors": "Stephen Fitz", "published": "2023-09-17", "updated": "2023-09-17", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.CY", "cs.LG", "cs.NE" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2309.03852v2", "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. 
Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", "published": "2023-09-07", "updated": "2023-09-17", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.15007v1", "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. 
We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", "published": "2023-10-23", "updated": "2023-10-23", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.CR", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.08780v1", "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", "published": "2023-10-13", "updated": "2023-10-13", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.18333v3", "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. 
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", "published": "2023-10-20", "updated": "2023-12-15", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.07420v1", "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. 
Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", "published": "2023-12-12", "updated": "2023-12-12", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2404.03192v1", "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", "published": "2024-04-04", "updated": "2024-04-04", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2308.10149v2", "title": "A Survey on Fairness in Large Language Models", "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. 
However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", "published": "2023-08-20", "updated": "2024-02-21", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2309.14345v2", "title": "Bias Testing and Mitigation in LLM-based Code Generation", "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. 
This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", "published": "2023-09-03", "updated": "2024-01-09", "primary_cat": "cs.SE", "cats": [ "cs.SE", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.00884v2", "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. 
We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", "published": "2024-03-01", "updated": "2024-03-05", "primary_cat": "cs.DB", "cats": [ "cs.DB", "cs.AI", "cs.IR" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.02049v1", "title": "Post Turing: Mapping the landscape of LLM Evaluation", "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. 
Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", "published": "2023-11-03", "updated": "2023-11-03", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "68T50", "I.2.7" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2404.12736v1", "title": "Large Language Model Supply Chain: A Research Agenda", "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. 
Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", "published": "2024-04-19", "updated": "2024-04-19", "primary_cat": "cs.SE", "cats": [ "cs.SE" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2405.01769v1", "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. 
Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", "published": "2024-05-02", "updated": "2024-05-02", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.14804v1", "title": "Use large language models to promote equity", "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. 
If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", "published": "2023-12-22", "updated": "2023-12-22", "primary_cat": "cs.CY", "cats": [ "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.19118v1", "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. 
Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", "published": "2023-05-30", "updated": "2023-05-30", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2404.06003v1", "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. 
Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", "published": "2024-04-09", "updated": "2024-04-09", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2403.00811v1", "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", "published": "2024-02-25", "updated": "2024-02-25", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.04489v1", "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", "published": "2024-02-07", "updated": "2024-02-07", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CR", "cs.CY", "stat.ME" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2309.11653v2", "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", "published": "2023-09-20", "updated": "2024-04-02", "primary_cat": "cs.HC", "cats": [ "cs.HC", "cs.AI", "cs.CR" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2311.01964v1", "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", "published": "2023-11-03", "updated": "2023-11-03", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2401.01262v2", "title": "Fairness Certification for Natural Language Processing and Large Language Models", "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", "authors": "Vincent Freiberger, Erik Buchmann", "published": "2024-01-02", "updated": "2024-01-03", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.CY", "cs.LG", "68T50", "I.2.7" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.10567v3", "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", "published": "2024-02-16", "updated": "2024-02-21", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.12150v1", "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", "published": "2024-02-19", "updated": "2024-02-19", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "I.2; J.4" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2308.05374v2", "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", "published": "2023-08-10", "updated": "2024-03-21", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.LG" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.11406v2", "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", "published": "2024-02-18", "updated": "2024-02-26", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.06852v2", "title": "ChemLLM: A Chemical Large Language Model", "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", "published": "2024-02-10", "updated": "2024-04-25", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2206.13757v1", "title": "Flexible text generation for counterfactual fairness probing", "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", "published": "2022-06-28", "updated": "2022-06-28", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.CY" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2310.18130v2", "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", "published": "2023-10-27", "updated": "2023-11-07", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.HC" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2403.13840v1", "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", "published": "2024-03-15", "updated": "2024-03-15", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.SI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.18276v1", "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", "published": "2024-02-28", "updated": "2024-02-28", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2403.15451v1", "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", "published": "2024-03-18", "updated": "2024-03-18", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2308.05345v3", "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", "published": "2023-08-10", "updated": "2023-11-11", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AR" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2303.01248v3", "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", "published": "2023-03-01", "updated": "2023-10-13", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2310.13343v1", "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", "published": "2023-10-20", "updated": "2023-10-20", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2403.02839v1", "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", "published": "2024-03-05", "updated": "2024-03-05", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2404.18276v1", "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", "published": "2024-04-28", "updated": "2024-04-28", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "D.1; I.2" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2402.15215v1", "title": "Item-side Fairness of Large Language Model-based Recommendation System", "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", "published": "2024-02-23", "updated": "2024-02-23", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2312.14769v3", "title": "Large Language Model (LLM) Bias Index -- LLMBI", "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", "published": "2023-12-22", "updated": "2023-12-29", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.CY", "cs.LG", "I.2.7" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2310.06500v1", "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", "published": "2023-10-10", "updated": "2023-10-10", "primary_cat": "cs.AI", "cats": [ "cs.AI" ], "category": "LLM Fairness" },
{ "url": "http://arxiv.org/abs/2308.10397v2", "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. 
To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", "published": "2023-08-21", "updated": "2023-10-27", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.05668v1", "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. 
In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", "authors": "Yashar Deldjoo, Tommaso di Noia", "published": "2024-03-08", "updated": "2024-03-08", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2307.15997v1", "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. 
We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", "published": "2023-07-29", "updated": "2023-07-29", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.09219v5", "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. 
We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", "published": "2023-10-13", "updated": "2023-12-01", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.01937v1", "title": "Can Large Language Models Be an Alternative to Human Evaluations?", "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. 
We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", "authors": "Cheng-Han Chiang, Hung-yi Lee", "published": "2023-05-03", "updated": "2023-05-03", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.HC" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.04892v2", "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. 
These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", "published": "2023-11-08", "updated": "2024-01-27", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.03033v1", "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. 
Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", "published": "2023-11-06", "updated": "2023-11-06", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2304.03728v1", "title": "Interpretable Unified Language Checking", "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. 
With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", "published": "2023-04-07", "updated": "2023-04-07", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.17553v1", "title": "RuBia: A Russian Language Bias Detection Dataset", "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", "published": "2024-03-26", "updated": "2024-03-26", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2401.04057v1", "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", "authors": "Chandan Kumar Sah, Dr. 
Lian Xiaoli, Muhammad Mirajul Islam", "published": "2024-01-08", "updated": "2024-01-08", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.AI", "cs.SE" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.14473v1", "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications.
This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", "authors": "Joschka Haltaufderheide, Robert Ranisch", "published": "2024-03-21", "updated": "2024-03-21", "primary_cat": "cs.CY", "cats": [ "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.11764v1", "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. 
These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", "published": "2024-02-19", "updated": "2024-02-19", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.CY", "68T50", "I.2.7; K.4.1" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2405.02219v1", "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", "authors": "Yashar Deldjoo", "published": "2024-05-03", "updated": "2024-05-03", "primary_cat": "cs.IR", "cats": [ "cs.IR" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2404.01349v1", "title": "Fairness in Large Language Models: A Taxonomic Survey", "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. 
Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", "published": "2024-03-31", "updated": "2024-03-31", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.17916v2", "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. 
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into the model's limitations.", "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", "published": "2024-02-27", "updated": "2024-03-30", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.02294v1", "title": "LLMs grasp morality in concept", "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", "authors": "Mark Pock, Andre Ye, Jared Moore", "published": "2023-11-04", "updated": "2023-11-04", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.06056v1", "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications.
Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", "published": "2023-12-11", "updated": "2023-12-11", "primary_cat": "cs.SE", "cats": [ "cs.SE", "cs.AI", "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.00306v1", "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. 
In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", "published": "2023-11-01", "updated": "2023-11-01", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.15198v2", "title": "Do LLM Agents Exhibit Social Behavior?", "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. 
This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", "authors": "Yan Leng, Yuan Yuan", "published": "2023-12-23", "updated": "2024-02-22", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.SI", "econ.GN", "q-fin.EC" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2307.03838v2", "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. 
However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", "published": "2023-07-07", "updated": "2023-10-24", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.06899v4", "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. 
Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", "published": "2023-11-12", "updated": "2024-04-15", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.15491v1", "title": "Open Source Conversational LLMs do not know most Spanish words", "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. 
These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", "published": "2024-03-21", "updated": "2024-03-21", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.09447v2", "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. 
In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", "published": "2023-11-15", "updated": "2024-04-02", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.03514v3", "title": "Can Large Language Models Transform Computational Social Science?", "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. 
Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", "published": "2023-04-12", "updated": "2024-02-26", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2401.08495v2", "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. 
We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", "published": "2024-01-16", "updated": "2024-04-26", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.08189v1", "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. 
We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", "authors": "Karthik Sreedhar, Lydia Chilton", "published": "2024-02-13", "updated": "2024-02-13", "primary_cat": "cs.HC", "cats": [ "cs.HC" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.12090v1", "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. 
CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", "published": "2023-05-20", "updated": "2023-05-20", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.AI", "cs.CL", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.15478v1", "title": "A Group Fairness Lens for Large Language Models", "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. 
To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", "published": "2023-12-24", "updated": "2023-12-24", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.14208v2", "title": "Content Conditional Debiasing for Fair Text Embedding", "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", "published": "2024-02-22", "updated": "2024-02-23", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.CY", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2401.15585v1", "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. 
Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", "published": "2024-01-28", "updated": "2024-01-28", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2312.15398v1", "title": "Fairness-Aware Structured Pruning in Transformers", "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", "published": "2023-12-24", "updated": "2023-12-24", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.CY", "cs.LG" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.16343v2", "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. 
Codes and datasets will be\nreleased upon acceptance.", "authors": "Xiang Chen, Xiaojun Wan", "published": "2023-10-25", "updated": "2024-03-21", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.13862v2", "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", "published": "2023-05-23", "updated": "2023-08-29", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.19465v1", "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. 
In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", "published": "2024-02-29", "updated": "2024-02-29", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.11595v3", "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. 