AcademicEval / related_34K / test_related_short_2404.16966v1.json
[
{
"url": "http://arxiv.org/abs/2404.16966v1",
"title": "Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks",
"abstract": "Benchmarks have emerged as the central approach for evaluating Large Language\nModels (LLMs). The research community often relies on a model's average\nperformance across the test prompts of a benchmark to evaluate the model's\nperformance. This is consistent with the assumption that the test prompts\nwithin a benchmark represent a random sample from a real-world distribution of\ninterest. We note that this is generally not the case; instead, we hold that\nthe distribution of interest varies according to the specific use case. We find\nthat (1) the correlation in model performance across test prompts is\nnon-random, (2) accounting for correlations across test prompts can change\nmodel rankings on major benchmarks, (3) explanatory factors for these\ncorrelations include semantic similarity and common LLM failure points.",
"authors": "Melissa Ailem, Katerina Marazopoulou, Charlotte Siska, James Bono",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Evaluating the performance of LLMs has become a critical area of research, drawing significant attention in recent years. Comprehensive surveys of LLM evaluation can be found in Chang et al. (2023); Guo et al. (2023), and Liang et al. (2022). When assessing the quality of LLMs, the robustness aspect is becoming of increasing importance (Wang et al., 2022; Goel et al., 2021). Robustness investigates the stability of a model when confronted with unforeseen prompts. Robustness research can be divided into three main lines of work (Li et al., 2023): (i) robustness under distribution shift (Wang et al., 2021; Yang et al., 2023), (ii) robustness to adversarial input (Zhu et al., 2023; Wang et al., 2023a), and (iii) robustness to dataset bias (Gururangan et al., 2018; Le Bras et al., 2020; Niven and Kao, 2019). Our work falls into the latter category. Reducing bias on benchmarks is a long-standing area of research spanning many diverse fields. Applications range from weighing survey responses to match a target population (DeBell, 2018), to accounting for language biases in visual questionanswering (Goyal et al., 2017). In the context of NLI, researchers have looked into improving the quality of prompts in order to mitigate certain types of biases. Work in this area has focused on determining the quality of prompts by generating optimal prompts (Pryzant et al., 2023; Deng et al., 2022) or by clustering prompts based on semantic similarity (Kuhn et al., 2023). Additionally, researchers have investigated data leakage between benchmarks and LLM training data (Zhou et al., 2023; Oren et al., 2023). Limited research has been conducted to study inherent biases in LLM benchmarks. Among existing works, Gururangan et al. (2018) and Niven and Kao (2019) have shown that models leverage spurious statistical relationships in the benchmark datasets and, thus, their performance on the benchmarks is overestimated. In the same spirit, Le Bras et al. (2020) propose to investigate AFLITE (Sakaguchi et al., 2023), an iterative approach to filter datasets by removing biased data points to mitigate overestimation of language models\u2019 performance. More recently, Alzahrani et al. (2024) show that performance of LLMs is highly sensitive to minor changes in benchmarks with multiple-choice questions. Our work is orthogonal yet complementary to previous work. In particular, we propose a new method to identify biases in a benchmark by looking at the performance of multiple recent LLMs on that benchmark. We show that similarity in performance correlates with similarity in prompts. To the best of our knowledge, our work is the first approaching benchmark biases by analyzing and leveraging the performance of a collection of models on a set of major benchmarks; as well as investigating the impact of inherent distributional biases in benchmarks used on LLM comparative studies.",
"pre_questions": [],
"main_content": "Introduction Since the introduction of the Transformer architecture (Vaswani et al., 2017), Large Language Models (LLMs) have progressed into sophisticated systems with an outstanding ability to comprehend and generate text that mimic human language. Notable models in this domain include ChatGPT1, utilizing the GPT-3.5-TURBO or GPT-4 architectures2, LLaMA (Touvron et al., 2023), ChatGLM (Zeng et al., 2023), Alpaca (Taori et al., 2023), and Falcon (Penedo et al., 2023). Due to their effectiveness, LLMs are becoming very popular in both academia and industry, making their evaluation crucial. However, this effectiveness comes at the cost of increased complexity, which makes their evaluation very challenging. Although prior research has introduced benchmarks for different tasks along with evaluation measures, these \u2020These authors contributed equally to this work. 1New chat: https://chat.openai.com/ 2Models OpenAI API: https://platform.openai. com/docs/models/ assessments often overlook potential biases. When a benchmark includes multiple prompts with similar characteristics, it can increase or decrease the average performance of a model, so model comparisons can become brittle with respect to benchmark composition. In this work, we show that the inherent connections between the prompts in current benchmarks impact the models\u2019 performance and their relative rankings. The standard approach for evaluation on a benchmark is to (i) obtain model responses for each prompt in the benchmark, (ii) compute the performance metrics for each response, (iii) aggregate (usually average) the performance metrics to obtain a single performance metric over the benchmark, and (iv) compare models by comparing their aggregate performance. When aggregating performance metrics in step iii above, each prompt is generally weighted equally (Yang and Menczer, 2023; Pe\u00f1a et al., 2023). However, using equal weights reflects the assumption that prompts in the benchmark are \u201cequal\u201d, in the sense that prompts are representative samples of a target distribution of interest. In the case of LLMs, the notion of a target distribution (i.e., the distribution of all possible prompts for a given use case) is usually not well-defined. For example, different Natural Language Inference (NLI) applications may have very different target distributions, and we should not expect a single benchmark to capture every one. Therefore, one must ask: What distribution do the prompts in the benchmark represent? Would considering different distributions fundamentally change model comparisons? In this work, we present a novel approach to assess the robustness and adequacy of benchmarks used in evaluating LLMs, by analyzing the performance of multiple LLMs on a set of four major benchmarks. Our key contributions are outlined below: 1. For each considered benchmark, we observe arXiv:2404.16966v1 [cs.CL] 25 Apr 2024 that the correlation of model performance across prompts is significant (p-value < 0.05). This demonstrates the existence of relationships between prompts within the investigated benchmarks. 2. We explore the robustness of model comparisons to different distributional assumptions based on correlation structure, and we observe shifts in performance as large as 10% and rank changes as large as 5 (out of 14 models). 3. We provide a characterization of performance over the distribution of all possible prompt weights. This constitutes a robustness check that can be incorporated in comparative studies. 4. 
We show that model performance similarity across prompts can be explained by semantic similarity, but it is most likely derived from common failure points of the LLM. In this section, we first outline the problem setup and introduce the notation and expressions that will be employed throughout the paper. Second, we present the approach to evaluate whether relationships between prompts (based on models\u2019 performance) are statistically non-random. Furthermore, we describe our method for analyzing how sensitive model comparisons are with respect to different distributional assumptions of the benchmark. Finally, we present our proposed methodology for exploring the origins of relationships between prompt performance vectors. 3.1 Problem setup Consider a benchmark containing n prompts {p_1, . . . , p_n}, and a set of k LLMs {m_1, . . . , m_k} being evaluated. We define the performance matrix Q as an n \u00d7 k matrix, where every cell Q[i, j] represents the performance of model m_j on prompt p_i. We refer to the i-th row of that matrix, q_i, as a performance vector for prompt p_i. To measure how similar two prompts are with respect to model performance, we compute the similarity between their performance vectors s_perf(p_i, p_j) := s(q_i, q_j), where s(\u00b7, \u00b7) is a similarity function. Here, we consider cosine, Jaccard, and Hamming similarity. Given a performance matrix Q and a similarity function s, we compute an n \u00d7 n similarity matrix T_s(Q), where every cell T[i, j] is the performance similarity for prompts p_i, p_j: T[i, j] = s_perf(p_i, p_j). Semantic meaning from text is commonly understood through the use of embeddings. An embedding of a prompt is a numerical vector that contains the learned representations of semantic meaning. Measuring semantic similarity between two prompts is achieved by measuring the distance between their embeddings. In this paper, we use ada-2 embeddings from OpenAI (https://openai.com/blog/new-and-improved-embedding-model). The ada-2 embeddings are widely used and have been proven effective in various NLP tasks. These embeddings have shown strong performance in assessing semantic similarity between texts (Aperdannier et al., 2024; Kamalloo et al., 2023; Freestone and Santu, 2024). For a set of prompts {p_1, . . . , p_n}, we compute a matrix of embeddings E = {e_1, . . . , e_n}. E is an n \u00d7 s matrix, where s is the size of the embedding vectors. To measure semantic similarity between pairs of prompts, we compute similarity metrics between the corresponding rows: s_sem(p_i, p_j) = s(e_i, e_j). 3.2 Determining if performance vectors are correlated Given a benchmark, we assess whether the observed similarity among performance vectors is significant. If the observed similarity is significantly high, this implies the existence of specific connections between prompts. These connections lead to similar model behavior when responding to these prompts. To test this hypothesis, we perform permutation tests. We generate permutations of the performance matrix Q by randomly shuffling the cells of each column. In this way, we permute the values of the model responses across prompts, while holding constant the overall performance of each model (i.e., the column averages of Q). We then compute a similarity matrix T_s(Q) for the observed performance matrix Q, as well as for each permutation Q\u2032 of the performance matrix: [T_s(Q\u2032_1), T_s(Q\u2032_2), . . .].
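As an illustration of this setup, a minimal sketch of building the performance-similarity matrix and its column-permuted counterparts could look as follows; this is our own sketch (not the authors' code), assuming a binary performance matrix Q of shape n x k and cosine similarity, with all variable names being illustrative:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def perf_similarity(Q):
    # T[i, j] = cosine similarity between the performance vectors (rows) of prompts i and j
    return cosine_similarity(Q)

def permute_columns(Q, rng):
    # Shuffle each column independently: per-model accuracy (the column means) is preserved,
    # while any association between prompts is destroyed.
    return np.column_stack([rng.permutation(Q[:, j]) for j in range(Q.shape[1])])

rng = np.random.default_rng(0)
Q = rng.integers(0, 2, size=(200, 12)).astype(float)  # toy data: 200 prompts, 12 models
T_obs = perf_similarity(Q)
T_perm = [perf_similarity(permute_columns(Q, rng)) for _ in range(1000)]
```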
We compare the distribution of values from T_s(Q) with the distribution of values from the permuted tables [T_s(Q\u2032_1), T_s(Q\u2032_2), . . .]. We conduct a permutation test to compare the average, 75th, and 95th percentiles of these distributions. The p-value of the permutation test is calculated as the proportion of permuted tables for which the statistic is greater than the one obtained with the observed table. Additionally, we use the Kolmogorov-Smirnov (KS) test to compare the entire distribution of values between observed and permuted similarity matrices. To further support our findings, we cluster the observed and permuted performance vectors. If there are non-random correlations between performance vectors, we would expect the clustering of the observed vectors to have higher clustering quality metrics, such as silhouette score. 3.3 Effect of non-uniform weights in aggregate performance metrics So far, we have focused on aggregate performance measures that treat prompts as if they are independent and identically distributed (i.i.d.) samples from some real-world distribution of interest\u2014i.e., each prompt is given equal weight in calculating aggregate performance metrics. In this section, we examine the implications of relaxing this assumption for ranking models based on their performance. Generally, there is no universally correct distribution of interest\u2014it depends on each user\u2019s application. Here, we look into three different ways of capturing distributional assumptions (i.e., of defining weights) for a given benchmark. Cluster-based: We leverage the clustering of performance vectors described above. We consider the following variants for evaluating performance: 1. Only include prompts that are cluster representatives (i.e., the medoids of the clusters). This effectively decreases the size of the benchmark. 2. Include all prompts, but weigh them based on their distance from their cluster representative. We employ two types of weights: (i) Distance-based: The further away a prompt is from the cluster representative, the larger its weight. This setting places more emphasis on the diversity of the benchmark. More formally, let p_i be a prompt in cluster C_j, p^r_j be the representative prompt of cluster C_j, and d(\u00b7, \u00b7) the distance function between two prompts. The weight w for p_i is: w(p_i) = [ d(p_i, p^r_j) / \u03a3_{p_k \u2208 C_j} d(p_k, p^r_j) ] \u00d7 [ |C_j| / \u03a3_i |C_i| ]. The first factor is the within-cluster weight of the prompt (normalized within cluster). The second factor weighs all prompts of a given cluster proportionally to the cluster\u2019s size. (ii) Inverse-distance weights: The closer a prompt is to the cluster representative, the larger its weight. This setting effectively smooths out the hard clustering we produced: all data points contribute to the performance, not just the cluster representatives. The weight w for p_i is computed as: w(p_i) = [ d^{-1}(p_i, p^r_j) / \u03a3_{p_k \u2208 C_j} d^{-1}(p_k, p^r_j) ] \u00d7 [ |C_j| / \u03a3_i |C_i| ]. Increasing benchmark size: We start with a random prompt and iteratively add new prompts into the benchmark. To select the next prompt to add, we use two methods: (i) most informative: select the prompt with the largest cosine distance (lowest cosine similarity) from the previously selected ones in order to obtain an informative test set with a reduced semantic similarity between prompts, (ii) random: select a random prompt.
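A small sketch of the \u201cmost informative\u201d selection rule just described (greedy selection of the prompt with the largest average cosine distance from those already chosen) could look like the following; this is our reading of the procedure applied to a prompt-embeddings matrix E, not the authors' implementation, and the names are illustrative:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def most_informative_order(E, seed=0):
    # Greedy ordering: start from a random prompt, then repeatedly add the prompt whose
    # average cosine distance to the already-selected prompts is largest.
    rng = np.random.default_rng(seed)
    n = E.shape[0]
    D = cosine_distances(E)                    # n x n pairwise distances between prompt embeddings
    selected = [int(rng.integers(n))]
    remaining = set(range(n)) - set(selected)
    while remaining:
        rest = np.fromiter(remaining, dtype=int)
        avg_dist = D[np.ix_(rest, selected)].mean(axis=1)
        nxt = int(rest[np.argmax(avg_dist)])   # the prompt least similar to the current benchmark
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```

Prefixes of the returned order give the growing benchmarks evaluated later in Section 5.2.2; swapping the argmax for a random draw gives the random baseline.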
Random distributions of weights: We weigh each prompt and compute weighted performance, with weights drawn uniformly at random. To achieve that, we sample uniformly at random from the unit simplex using the sampling technique described in Smith and Tromble (2004). This approach aims to provide a characterization over all possible weight configurations. 3.4 Comparing performance vectors with semantic embeddings of prompts Having established that model performance is similar across prompts, we next investigate where this similarity stems from. Our hypothesis is that for a pair of prompts, similar model performance can occur if the prompts are semantically similar. We use linear regression to determine if there exists a significant relationship between semantic similarity and model performance similarity: s_perf(p_i, p_j) = s_sem(p_i, p_j)\u03b2 + \u03f5, where \u03b2 is the coefficient of how much semantic similarity contributes to the model and \u03f5 is the error term. Using all prompt pairs raises concerns about the data being i.i.d., given that each observation is a pairwise comparison and each member of a pair appears in many observations. To avoid that, we estimate one model for each prompt, including all the pairwise observations of which that prompt is a part. We collect p-values for the coefficients across all models and perform multiple-hypothesis adjustment to generate False Discovery Rate (FDR) values. We repeat the same approach for 1000 permutations as described in Section 3.2 for both pairwise performance and semantic similarity vectors. Finally, we compare the distribution of coefficients and FDRs between original data and permutations using the KS test. 4 Experimental setup In this section, we describe the setting of our experiments. Specifically, we provide details on the benchmarks and evaluation metrics we use, the LLMs we consider, and how we evaluate performance of the LLMs on the benchmarks. 4.1 Benchmarks We investigate four major benchmarks that are designed for different tasks. ANLI The Adversarial Natural Language Inference (ANLI) dataset (https://huggingface.co/datasets/anli) is a large-scale dataset for natural language inference (NLI) (Nie et al., 2020). It is collected via an iterative, adversarial human-and-model-in-the-loop procedure, making it more difficult than its predecessors. The dataset used here comprises approximately 100K samples for the training set, 1,200 for the development set, and 1,200 for the test set. Each sample contains a context, a hypothesis, and a label. The goal is to determine the logical relationship between the context and the hypothesis. The label is the assigned category indicating that relationship. In the context of NLI, the labels typically include \u201centailment\u201d, \u201ccontradiction\u201d, or \u201cneutral\u201d. Finally, ANLI makes available a reason (provided by the human-in-the-loop), explaining why a sample was misclassified. HellaSwag This is a commonsense natural language inference dataset (Zellers et al., 2019), tasking machines with identifying the most probable follow-up for an event description. Comprising 70,000 instances, each scenario presents four potential outcomes, with only one being accurate. Engineered to be challenging for cutting-edge models, the dataset employs Adversarial Filtering to incorporate machine-generated incorrect responses, frequently misclassified by pretrained models. Covering diverse domains, HellaSwag demands a fusion of world knowledge and logical reasoning for successful interpretation.
CommonsenseQA This is a multiple-choice question-answering dataset that requires different types of commonsense knowledge to predict the correct answers (Talmor et al., 2019). It contains 12,102 questions with one correct answer and four distractor answers. The questions are crowdsourced and cover a wide range of topics, such as open-domain QA, real-life situations, elementary science, and social skills. CNN/Daily Mail The CNN/Daily Mail dataset is a widely used benchmark for text summarization (Nallapati et al., 2016). The dataset comprises news stories from the CNN and Daily Mail websites. In total, the corpus contains 286,817 training, 13,368 validation, and 11,487 test pairs. 4.2 Evaluation measures For ANLI, HellaSwag, and CommonsenseQA, the performance matrix contains binary values (correct / incorrect answer). Hence, we use average accuracy to evaluate the performance of each model, as commonly done with these benchmarks (Nie et al., 2020; Wei et al., 2022; Zellers et al., 2019; Talmor et al., 2019). For CNN/Daily Mail, following previous work (See et al., 2017), we measure model performance using the ROUGE score. 4.3 Considered LLMs In order to have a diverse collection of LLMs, we include models from several developers, such as OpenAI and Meta. These include GPT LLMs (Brown et al., 2020; OpenAI, 2023), Llama LLMs (Touvron et al., 2023), and other popular LLMs, such as Falcon-180b (Almazrouei et al., 2023), Koala 13B (Geng et al., 2023), and Alpaca 7B (Wang et al., 2023b). Table 1 shows the various models used for each benchmark. (Due to constraints in LLMs\u2019 availability, we use different LLMs for each benchmark. This does not impact our work, as each benchmark analysis is standalone and independent of the remaining benchmarks.) 4.4 Performance evaluation For ANLI, we evaluate each model on the test dataset, which contains 1200 prompts. For each sample, we use 7 few-shot samples extracted from the ANLI dev set. For the remaining benchmarks, we randomly sample 10% of each benchmark for test and use the rest for few-shot selection. This results in 1005, 1221, and 1150 test samples for HellaSwag, CommonsenseQA, and CNN/Daily Mail respectively. For HellaSwag, we use 10 few-shot examples, while for CommonsenseQA and CNN/Daily Mail we use 5 few-shot examples. [Table 1: Summary of LLMs used for ANLI, HellaSwag (HS), CommonsenseQA (CSQA), and CNN/Daily Mail (CNN/DM); check marks in the original table denote which LLMs were used for each benchmark. The models span GPT variants (e.g., ChatGPT-Turbo, Text-DaVinci, GPT-4, and GPT-4-Turbo), Llama variants (Llama-13B/30B/65B, Llama-2-13B/70B), and others (Persimmon 8B: https://www.adept.ai/blog/persimmon-8b, Vicuna 13B: https://lmsys.org/blog/2023-03-30-vicuna/, Claude-2: https://www.anthropic.com/index/claude-2, Mistral 7B: https://mistral.ai/news/announcing-mistral-7b/, Falcon-180b, Koala 13B, Alpaca 7B). Totals per benchmark: ANLI 12, HS 13, CSQA 14, CNN/DM 8.] 5 Results In this section, we present the results of the experiments described in Section 3 on the benchmarks.
5.1 Performance vectors are correlated To determine if prompt performance vectors are correlated, we perform the permutation tests described in Section 3.2, using different correlation measures. The obtained p-values for ANLI, HellaSwag, and CommonsenseQA are depicted in Table 2. [Table 2: p-values obtained with permutation tests and the KS test using different correlation measures (Hamming / Cosine / Jaccard) and aggregation functions for ANLI, HellaSwag (HS), and CommonsenseQA (CSQA). ANLI: Average 0.60 / 0.59 / 0.0009; 75th percentile 0.66 / 0.0009 / 0.67; 95th percentile 0.0009 / 0.0009 / 0.0009; KS test 2e-5 / 2e-5 / 2e-5. HS: Average 0.52 / 0.57 / 0.0009; 75th percentile 0.0009 / 0.0009 / 0.0009; 95th percentile 0.88 / 0.85 / 0.87; KS test 2e-5 / 2e-5 / 2e-5. CSQA: Average 0.53 / 0.52 / 0.0009; 75th percentile 0.0009 / 0.0009 / 0.0029; 95th percentile 0.0009 / 0.0009 / 0.0009; KS test 2e-5 / 2e-5 / 2e-5.] On ANLI and CommonsenseQA, the permutation tests show strong evidence that the correlations between the prompt performance vectors are significant. For HellaSwag, our findings reveal consistently low p-values across all correlation measures when using the 75th percentile, as well as a low p-value when averaging Jaccard similarities. For the three benchmarks above, the KS test is significant across all correlation measures. For CNN/Daily Mail, the performance matrix contains ROUGE scores, which are continuous values. Thus, we use cosine similarity to compare the average correlations obtained from the original and permuted performance matrices. The results show that the correlations among original performance vectors are significantly greater. To further support this finding, we cluster the model responses using spherical k-means (Dhillon and Modha, 2001). We choose the optimal number of clusters to maximize the average silhouette score, computed using cosine distance. Table 3 contains the average silhouette scores of clustering the performance vectors and a random permutation of them. For all benchmarks, the performance vectors produce higher silhouette scores compared to the permuted performance vectors. This provides additional evidence to support the outcome of the hypothesis tests presented above: the performance vectors are similar. [Table 3: Average silhouette score of clustering observed performance vectors vs. a random permutation, per benchmark. ANLI: 0.52 vs. 0.21; HellaSwag: 0.54 vs. 0.24; CommonsenseQA: 0.61 vs. 0.29; CNN/Daily Mail: 0.25 vs. 0.21.] 5.2 Impact of prompt weights on performance and relative ranking of models In this section, we present the results of different weighting schemes for the prompts of a benchmark, as described in Section 3.3. 5.2.1 Cluster-based evaluation First, we cluster the performance vectors of each benchmark as described earlier. Then, we compute the average accuracy of models for each benchmark, using only the cluster representatives of that benchmark. We also compute weighted performance using distance-based and inverse-distance-based weights. Figure 1 illustrates how these weighting schemes affect the relative ranking of models for each benchmark. The rows correspond to different weighting schemes, while the columns correspond to the different models and are ordered by increasing original performance (i.e., decreasing rank). Every cell contains the ranking change (compared to the original benchmark) of the model of that column for the method of that row. If there were no ranking changes, all values would be 0.
However, we observe that there are multiple ranking changes as great as 5 (i.e., a model is ranked 5 positions above its rank on the original benchmark). 5.2.2 Increasing size of benchmark Next, we study how performance is affected by the size and diversity of the benchmark. We start with a random prompt and iteratively add new prompts to the benchmark, either by adding the most informative prompt (i.e., the one with the maximum average distance from the current benchmark), or a random one. Figure 2 shows the average performance for each model as the benchmark size increases (maximum benchmark size corresponds to the original benchmark). Looking at the most informative method for ANLI (Figure 2a), the first 400 prompts result in random performance (0.5) for all models. This suggests that the initial prompts chosen with this method are the most \u201cdifficult\u201d, in that the models are exhibiting performance close to random (accuracy 50%). Similar results are observed for HellaSwag and CommonsenseQA (see Appendix C, Figure 9), but not for CNN/Daily Mail (Figure 2b), where the performance on the reduced benchmark follows a similar pattern as the performance on the original benchmark. [Figure 1: Visualization of ranking changes (compared to the original benchmark) for various benchmark modifications, shown for (a) ANLI, (b) HellaSwag, (c) CommonsenseQA, and (d) CNN/Daily Mail. Rows show the different weighting methods (distance weights, inverse distance weights, medoids HDBSCAN, medoids spherical k-means), columns show the models. Each cell contains the ranking change (original ranking minus new ranking) of the column-model for the row-method. We observe rank changes as great as 5.]
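The rank changes summarized in Figure 1 come from re-aggregating the same per-prompt scores under the weights of Section 3.3. A minimal sketch of the distance-based and inverse-distance weights and of the resulting weighted scores, assuming per-prompt scores Q, cluster labels, and each prompt's distance to its cluster medoid (all names are ours, not the authors'):

```python
import numpy as np

def cluster_weights(dist_to_medoid, labels, inverse=False):
    # Distance-based weights (or inverse-distance weights when inverse=True) as in Section 3.3:
    # (inverse) distances are normalized within each cluster, and each cluster's total weight is
    # proportional to its share of prompts, so the weights sum to 1 over the whole benchmark.
    d = np.asarray(dist_to_medoid, dtype=float)
    if inverse:
        d = 1.0 / np.maximum(d, 1e-12)         # small floor avoids division by zero at the medoid
    labels = np.asarray(labels)
    w = np.zeros_like(d)
    n = len(labels)
    for c in np.unique(labels):
        idx = labels == c
        w[idx] = (d[idx] / max(d[idx].sum(), 1e-12)) * (idx.sum() / n)
    return w

def weighted_accuracy(Q, w):
    # One weighted score per model (per column of the n x k performance matrix Q).
    return w @ Q

def ranks(scores):
    # Rank 0 = best model; Figure 1 reports original rank minus re-weighted rank.
    return (-np.asarray(scores)).argsort().argsort()
```

Differencing ranks(weighted_accuracy(Q, w)) against the equal-weight ranking gives rank-change values of the kind plotted in Figure 1.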
[Figure 2: Average performance as benchmark size increases, shown for (a) ANLI and (b) CNN/Daily Mail. Prompts are added to maximize average cosine distance; the maximum benchmark size corresponds to performance on the original benchmark.] The random method tracks the original performance for all benchmarks (see Appendix C, Figure 10). 5.2.3 Random distributions of weights We explore the distribution over all possible weighting schemes and the effect they have on the weighted accuracy and relative ranking of the models. As described in Section 3.3, we sample 100,000 random weight configurations. For each model, we compute the weighted performance based on these weights. For ANLI, HellaSwag, and CommonsenseQA, the performance of a model can change by up to 10%. For CNN/Daily Mail, the range is smaller, up to 3%. Detailed results are included in Appendix D. We note that the range is similar for all models within a benchmark, indicating that it is a property related to the benchmark and not the specific models. To further demonstrate changes in relative ranking of models, we take a closer look at the pairwise ranking differences. Figure 3 depicts a pairwise comparison of weighted performance for each benchmark. Every cell shows how often the model in the row outperforms the model of the column. For ANLI, the ranking of the top two models is reversed for approximately half of the weight configurations! However, for the CNN/Daily Mail data, there are effectively no reversals (less than 0.01%). 5.3 Relationship between model performance and semantic similarity of prompts Having established that model performance is correlated across prompts, we investigate what can explain these correlations. Our hypothesis is that it is driven by semantic similarity. We use the method described in Section 3.4 to assess if there is a significant relationship between semantic similarity and model performance similarity. Our findings show that only CNN/Daily Mail presents a significant relationship between prompt semantic similarity and prompt performance similarity (see Figure 4d).
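A sketch of the per-prompt regression described in Section 3.4, with one no-intercept OLS fit per prompt over its pairwise observations followed by a Benjamini-Hochberg adjustment to obtain FDR values; this assumes precomputed similarity matrices and statsmodels, and is our approximation of the analysis rather than the authors' code:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def per_prompt_regressions(S_perf, S_sem):
    # One regression per prompt i over its pairs (i, j), j != i, matching the stated model
    # s_perf = s_sem * beta + eps (no intercept); p-values are then FDR-adjusted across prompts.
    n = S_perf.shape[0]
    coefs, pvals = [], []
    for i in range(n):
        mask = np.arange(n) != i
        X = S_sem[i, mask].reshape(-1, 1)
        fit = sm.OLS(S_perf[i, mask], X).fit()
        coefs.append(fit.params[0])
        pvals.append(fit.pvalues[0])
    _, fdr, _, _ = multipletests(pvals, method='fdr_bh')  # Benjamini-Hochberg adjustment
    return np.array(coefs), fdr
```

Repeating the same fits on permuted similarity matrices and comparing the coefficient and FDR distributions with a KS test gives comparisons of the kind shown in Figure 4.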
[Figure 3: Pairwise comparison of weighted performance for (a) ANLI and (b) CNN/Daily Mail. Each cell is the percentage of times the model of the row outperforms the model of the column.] This benchmark is a text summarization task, where the success of the ROUGE metric highly depends on the ability to extract relevant entities from text. For example, we find that prompts referring to the economy or global warming have high correlation in model performance (see Appendix B, Table 5). ANLI also makes available a reason component: what human agents state as the explanation for why the LLM gave a wrong answer. We find a significant relationship between semantic similarity using the reason component and prompt performance similarity (as seen in Figure 4a). The input prompt\u2014consisting of the context, hypothesis and label components\u2014shows no relationship, which is most likely because the creators of ANLI put great effort into ensuring diversity in the benchmark (Nie et al., 2020). This is also evident in Figure 2. The significance of the reason component indicates that the model performance vectors correlate because of how the model generates a response. We observe prompts where the reasons for similar model performance indicate that the model cannot do math, e.g., \u201cThe system may have missed this as it did not add up the losses from both sets\u201d and \u201cthe model might not know math\u201d (see Appendix B, Table 4).
HellaSwag and CommonsenseQA use a multiple-choice format. [Figure 4: Distribution of semantic similarity coefficients and FDRs for all benchmarks, shown for (a) ANLI (reason), (b) HellaSwag, (c) CommonsenseQA, and (d) CNN/Daily Mail. Red is original data, blue is permutations. KS tests for all distributions shown have p-values < 2e-5.] The lack of strong evidence supporting the correlation in these benchmarks (see Figures 4b and 4c) is likely due to the embeddings picking up similarities between the different choices, rather than the logic the LLMs employ to arrive at their conclusion. This is consistent with our findings for ANLI, where a significant relationship does not stem from inputs to the model, but from the LLMs\u2019 failure points. Our findings indicate there is a larger question about why the model performance vectors are correlated, and investigating this is central to understanding model performance. Semantic similarity can be a factor, but it depends on the task the benchmark is designed for. Based on our results for ANLI, it appears that the reasoning required for the task (i.e., the reasoning types that cause models to fail) can be even more important than semantic similarity. 6 Conclusion and future work LLMs are commonly evaluated on benchmarks that may include multiple prompts testing similar skills. In this work, we demonstrate this bias on major benchmarks, by showing that model performance across different prompts is significantly correlated. Furthermore, we demonstrate that LLM comparative studies can be significantly altered when using non-uniform weights for prompts during evaluation. The suggested approach can serve as a consistency check in comparative studies of LLMs, ensuring that the results take into consideration benchmark biases. Finally, we show that similar model performance across prompts can be explained by semantic similarity, but is most likely derived from common failure points of the LLM. Our findings could inform a broader diagnostic tool for evaluating the robustness of model quality comparisons with respect to distributional assumptions of benchmarks. Future work also includes identifying additional factors that may explain these biases. This information can give rise to solutions for improving benchmark robustness. These findings could help researchers who generate novel benchmarks to identify and eliminate biases. 7 Limitations Our study requires access to multiple LLMs to generate model performance vectors for each prompt in a benchmark. This can be computationally expensive and require GPUs. Some models, such as OpenAI\u2019s GPT-4, have limits on API calls, making data collection time-consuming. While we provide a novel approach for researchers to investigate bias in their own studies, providing a comprehensive de-biasing methodology is not within the scope of this work. Finally, we have only touched the surface on why prompts have similar performance across multiple LLMs. There are many other components to investigate, such as the length of the prompt and prompt complexity. This information could be leveraged to propose solutions for improving benchmarks, without running prompts through multiple LLMs."
},
{
"url": "http://arxiv.org/abs/2306.15261v1",
"title": "A Survey on Out-of-Distribution Evaluation of Neural NLP Models",
"abstract": "Adversarial robustness, domain generalization and dataset biases are three\nactive lines of research contributing to out-of-distribution (OOD) evaluation\non neural NLP models. However, a comprehensive, integrated discussion of the\nthree research lines is still lacking in the literature. In this survey, we 1)\ncompare the three lines of research under a unifying definition; 2) summarize\nthe data-generating processes and evaluation protocols for each line of\nresearch; and 3) emphasize the challenges and opportunities for future work.",
"authors": "Xinzhe Li, Ming Liu, Shang Gao, Wray Buntine",
"published": "2023-06-27",
"updated": "2023-06-27",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2211.08073v4",
"title": "GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective",
"abstract": "Pre-trained language models (PLMs) are known to improve the generalization\nperformance of natural language understanding models by leveraging large\namounts of data during the pre-training phase. However, the out-of-distribution\n(OOD) generalization problem remains a challenge in many NLP tasks, limiting\nthe real-world deployment of these methods. This paper presents the first\nattempt at creating a unified benchmark named GLUE-X for evaluating OOD\nrobustness in NLP models, highlighting the importance of OOD robustness and\nproviding insights on how to measure the robustness of a model and how to\nimprove it. The benchmark includes 13 publicly available datasets for OOD\ntesting, and evaluations are conducted on 8 classic NLP tasks over 21 popularly\nused PLMs, including GPT-3 and GPT-3.5. Our findings confirm the need for\nimproved OOD accuracy in NLP tasks, as significant performance degradation was\nobserved in all settings compared to in-distribution (ID) accuracy.",
"authors": "Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang",
"published": "2022-11-15",
"updated": "2023-05-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG",
"cs.PF"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1612.00837v3",
"title": "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering",
"abstract": "Problems at the intersection of vision and language are of significant\nimportance both as challenging research questions and for the rich set of\napplications they enable. However, inherent structure in our world and bias in\nour language tend to be a simpler signal for learning than visual modalities,\nresulting in models that ignore visual information, leading to an inflated\nsense of their capability.\n We propose to counter these language priors for the task of Visual Question\nAnswering (VQA) and make vision (the V in VQA) matter! Specifically, we balance\nthe popular VQA dataset by collecting complementary images such that every\nquestion in our balanced dataset is associated with not just a single image,\nbut rather a pair of similar images that result in two different answers to the\nquestion. Our dataset is by construction more balanced than the original VQA\ndataset and has approximately twice the number of image-question pairs. Our\ncomplete balanced dataset is publicly available at www.visualqa.org as part of\nthe 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA\nv2.0).\n We further benchmark a number of state-of-art VQA models on our balanced\ndataset. All models perform significantly worse on our balanced dataset,\nsuggesting that these models have indeed learned to exploit language priors.\nThis finding provides the first concrete empirical evidence for what seems to\nbe a qualitative sense among practitioners.\n Finally, our data collection protocol for identifying complementary images\nenables us to develop a novel interpretable model, which in addition to\nproviding an answer to the given (image, question) pair, also provides a\ncounter-example based explanation. Specifically, it identifies an image that is\nsimilar to the original image, but it believes has a different answer to the\nsame question. This can help in building trust for machines among their users.",
"authors": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh",
"published": "2016-12-02",
"updated": "2017-05-15",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2112.08313v2",
"title": "Measure and Improve Robustness in NLP Models: A Survey",
"abstract": "As NLP models achieved state-of-the-art performances over benchmarks and\ngained wide applications, it has been increasingly important to ensure the safe\ndeployment of these models in the real world, e.g., making sure the models are\nrobust against unseen or challenging scenarios. Despite robustness being an\nincreasingly studied topic, it has been separately explored in applications\nlike vision and NLP, with various definitions, evaluation and mitigation\nstrategies in multiple lines of research. In this paper, we aim to provide a\nunifying survey of how to define, measure and improve robustness in NLP. We\nfirst connect multiple definitions of robustness, then unify various lines of\nwork on identifying robustness failures and evaluating models' robustness.\nCorrespondingly, we present mitigation strategies that are data-driven,\nmodel-driven, and inductive-prior-based, with a more systematic view of how to\neffectively improve robustness in NLP models. Finally, we conclude by outlining\nopen challenges and future directions to motivate further research in this\narea.",
"authors": "Xuezhi Wang, Haohan Wang, Diyi Yang",
"published": "2021-12-15",
"updated": "2022-05-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2310.19736v3",
"title": "Evaluating Large Language Models: A Comprehensive Survey",
"abstract": "Large language models (LLMs) have demonstrated remarkable capabilities across\na broad spectrum of tasks. They have attracted significant attention and been\ndeployed in numerous downstream applications. Nevertheless, akin to a\ndouble-edged sword, LLMs also present potential risks. They could suffer from\nprivate data leaks or yield inappropriate, harmful, or misleading content.\nAdditionally, the rapid progress of LLMs raises concerns about the potential\nemergence of superintelligent systems without adequate safeguards. To\neffectively capitalize on LLM capacities as well as ensure their safe and\nbeneficial development, it is critical to conduct a rigorous and comprehensive\nevaluation of LLMs.\n This survey endeavors to offer a panoramic perspective on the evaluation of\nLLMs. We categorize the evaluation of LLMs into three major groups: knowledge\nand capability evaluation, alignment evaluation and safety evaluation. In\naddition to the comprehensive review on the evaluation methodologies and\nbenchmarks on these three aspects, we collate a compendium of evaluations\npertaining to LLMs' performance in specialized domains, and discuss the\nconstruction of comprehensive evaluation platforms that cover LLM evaluations\non capabilities, alignment, safety, and applicability.\n We hope that this comprehensive overview will stimulate further research\ninterests in the evaluation of LLMs, with the ultimate goal of making\nevaluation serve as a cornerstone in guiding the responsible development of\nLLMs. We envision that this will channel their evolution into a direction that\nmaximizes societal benefit while minimizing potential risks. A curated list of\nrelated papers has been publicly available at\nhttps://github.com/tjunlp-lab/Awesome-LLMs-Evaluation-Papers.",
"authors": "Zishan Guo, Renren Jin, Chuang Liu, Yufei Huang, Dan Shi, Supryadi, Linhao Yu, Yan Liu, Jiaxuan Li, Bojian Xiong, Deyi Xiong",
"published": "2023-10-30",
"updated": "2023-11-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2306.04528v4",
"title": "PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts",
"abstract": "The increasing reliance on Large Language Models (LLMs) across academia and\nindustry necessitates a comprehensive understanding of their robustness to\nprompts. In response to this vital need, we introduce PromptBench, a robustness\nbenchmark designed to measure LLMs' resilience to adversarial prompts. This\nstudy uses a plethora of adversarial textual attacks targeting prompts across\nmultiple levels: character, word, sentence, and semantic. The adversarial\nprompts, crafted to mimic plausible user errors like typos or synonyms, aim to\nevaluate how slight deviations can affect LLM outcomes while maintaining\nsemantic integrity. These prompts are then employed in diverse tasks, such as\nsentiment analysis, natural language inference, reading comprehension, machine\ntranslation, and math problem-solving. Our study generates 4788 adversarial\nprompts, meticulously evaluated over 8 tasks and 13 datasets. Our findings\ndemonstrate that contemporary LLMs are not robust to adversarial prompts.\nFurthermore, we present comprehensive analysis to understand the mystery behind\nprompt robustness and its transferability. We then offer insightful robustness\nanalysis and pragmatic recommendations for prompt composition, beneficial to\nboth researchers and everyday users. Code is available at:\nhttps://github.com/microsoft/promptbench.",
"authors": "Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, Xing Xie",
"published": "2023-06-07",
"updated": "2023-10-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CR",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1803.02324v2",
"title": "Annotation Artifacts in Natural Language Inference Data",
"abstract": "Large-scale datasets for natural language inference are created by presenting\ncrowd workers with a sentence (premise), and asking them to generate three new\nsentences (hypotheses) that it entails, contradicts, or is logically neutral\nwith respect to. We show that, in a significant portion of such data, this\nprotocol leaves clues that make it possible to identify the label by looking\nonly at the hypothesis, without observing the premise. Specifically, we show\nthat a simple text categorization model can correctly classify the hypothesis\nalone in about 67% of SNLI (Bowman et. al, 2015) and 53% of MultiNLI (Williams\net. al, 2017). Our analysis reveals that specific linguistic phenomena such as\nnegation and vagueness are highly correlated with certain inference classes.\nOur findings suggest that the success of natural language inference models to\ndate has been overestimated, and that the task remains a hard open problem.",
"authors": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith",
"published": "2018-03-06",
"updated": "2018-04-16",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2205.12548v3",
"title": "RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning",
"abstract": "Prompting has shown impressive success in enabling large pretrained language\nmodels (LMs) to perform diverse NLP tasks, especially when only few downstream\ndata are available. Automatically finding the optimal prompt for each task,\nhowever, is challenging. Most existing work resorts to tuning soft prompt\n(e.g., embeddings) which falls short of interpretability, reusability across\nLMs, and applicability when gradients are not accessible. Discrete prompt, on\nthe other hand, is difficult to optimize, and is often created by \"enumeration\n(e.g., paraphrasing)-then-selection\" heuristics that do not explore the prompt\nspace systematically. This paper proposes RLPrompt, an efficient discrete\nprompt optimization approach with reinforcement learning (RL). RLPrompt\nformulates a parameter-efficient policy network that generates the desired\ndiscrete prompt after training with reward. To overcome the complexity and\nstochasticity of reward signals by the large LM environment, we incorporate\neffective reward stabilization that substantially enhances the training\nefficiency. RLPrompt is flexibly applicable to different types of LMs, such as\nmasked (e.g., BERT) and left-to-right models (e.g., GPTs), for both\nclassification and generation tasks. Experiments on few-shot classification and\nunsupervised text style transfer show superior performance over a wide range of\nexisting finetuning or prompting methods. Interestingly, the resulting\noptimized prompts are often ungrammatical gibberish text; and surprisingly,\nthose gibberish prompts are transferrable between different LMs to retain\nsignificant performance, indicating LM prompting may not follow human language\npatterns.",
"authors": "Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric P. Xing, Zhiting Hu",
"published": "2022-05-25",
"updated": "2022-10-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2002.04108v3",
"title": "Adversarial Filters of Dataset Biases",
"abstract": "Large neural models have demonstrated human-level performance on language and\nvision benchmarks, while their performance degrades considerably on adversarial\nor out-of-distribution samples. This raises the question of whether these\nmodels have learned to solve a dataset rather than the underlying task by\noverfitting to spurious dataset biases. We investigate one recently proposed\napproach, AFLite, which adversarially filters such dataset biases, as a means\nto mitigate the prevalent overestimation of machine performance. We provide a\ntheoretical understanding for AFLite, by situating it in the generalized\nframework for optimum bias reduction. We present extensive supporting evidence\nthat AFLite is broadly applicable for reduction of measurable dataset biases,\nand that models trained on the filtered datasets yield better generalization to\nout-of-distribution tasks. Finally, filtering results in a large drop in model\nperformance (e.g., from 92% to 62% for SNLI), while human performance still\nremains high. Our work thus shows that such filtered datasets can pose new\nresearch challenges for robust generalization by serving as upgraded\nbenchmarks.",
"authors": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi",
"published": "2020-02-10",
"updated": "2020-07-11",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2002.04108v3",
"title": "Adversarial Filters of Dataset Biases",
"abstract": "Large neural models have demonstrated human-level performance on language and\nvision benchmarks, while their performance degrades considerably on adversarial\nor out-of-distribution samples. This raises the question of whether these\nmodels have learned to solve a dataset rather than the underlying task by\noverfitting to spurious dataset biases. We investigate one recently proposed\napproach, AFLite, which adversarially filters such dataset biases, as a means\nto mitigate the prevalent overestimation of machine performance. We provide a\ntheoretical understanding for AFLite, by situating it in the generalized\nframework for optimum bias reduction. We present extensive supporting evidence\nthat AFLite is broadly applicable for reduction of measurable dataset biases,\nand that models trained on the filtered datasets yield better generalization to\nout-of-distribution tasks. Finally, filtering results in a large drop in model\nperformance (e.g., from 92% to 62% for SNLI), while human performance still\nremains high. Our work thus shows that such filtered datasets can pose new\nresearch challenges for robust generalization by serving as upgraded\nbenchmarks.",
"authors": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi",
"published": "2020-02-10",
"updated": "2020-07-11",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2402.01781v1",
"title": "When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards",
"abstract": "Large Language Model (LLM) leaderboards based on benchmark rankings are\nregularly used to guide practitioners in model selection. Often, the published\nleaderboard rankings are taken at face value - we show this is a (potentially\ncostly) mistake. Under existing leaderboards, the relative performance of LLMs\nis highly sensitive to (often minute) details. We show that for popular\nmultiple choice question benchmarks (e.g. MMLU) minor perturbations to the\nbenchmark, such as changing the order of choices or the method of answer\nselection, result in changes in rankings up to 8 positions. We explain this\nphenomenon by conducting systematic experiments over three broad categories of\nbenchmark perturbations and identifying the sources of this behavior. Our\nanalysis results in several best-practice recommendations, including the\nadvantage of a hybrid scoring method for answer selection. Our study highlights\nthe dangers of relying on simple benchmark evaluations and charts the path for\nmore robust evaluation schemes on the existing benchmarks.",
"authors": "Norah Alzahrani, Hisham Abdullah Alyahya, Yazeed Alnumay, Sultan Alrashed, Shaykhah Alsubaie, Yusef Almushaykeh, Faisal Mirza, Nouf Alotaibi, Nora Altwairesh, Areeb Alowisheq, M Saiful Bari, Haidar Khan",
"published": "2024-02-01",
"updated": "2024-02-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2302.09664v3",
"title": "Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation",
"abstract": "We introduce a method to measure uncertainty in large language models. For\ntasks like question answering, it is essential to know when we can trust the\nnatural language outputs of foundation models. We show that measuring\nuncertainty in natural language is challenging because of \"semantic\nequivalence\" -- different sentences can mean the same thing. To overcome these\nchallenges we introduce semantic entropy -- an entropy which incorporates\nlinguistic invariances created by shared meanings. Our method is unsupervised,\nuses only a single model, and requires no modifications to off-the-shelf\nlanguage models. In comprehensive ablation studies we show that the semantic\nentropy is more predictive of model accuracy on question answering data sets\nthan comparable baselines.",
"authors": "Lorenz Kuhn, Yarin Gal, Sebastian Farquhar",
"published": "2023-02-19",
"updated": "2023-04-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1907.07355v2",
"title": "Probing Neural Network Comprehension of Natural Language Arguments",
"abstract": "We are surprised to find that BERT's peak performance of 77% on the Argument\nReasoning Comprehension Task reaches just three points below the average\nuntrained human baseline. However, we show that this result is entirely\naccounted for by exploitation of spurious statistical cues in the dataset. We\nanalyze the nature of these cues and demonstrate that a range of models all\nexploit them. This analysis informs the construction of an adversarial dataset\non which all models achieve random accuracy. Our adversarial dataset provides a\nmore robust assessment of argument comprehension and should be adopted as the\nstandard in future work.",
"authors": "Timothy Niven, Hung-Yu Kao",
"published": "2019-07-17",
"updated": "2019-09-16",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2101.04840v1",
"title": "Robustness Gym: Unifying the NLP Evaluation Landscape",
"abstract": "Despite impressive performance on standard benchmarks, deep neural networks\nare often brittle when deployed in real-world systems. Consequently, recent\nresearch has focused on testing the robustness of such models, resulting in a\ndiverse set of evaluation methodologies ranging from adversarial attacks to\nrule-based data transformations. In this work, we identify challenges with\nevaluating NLP systems and propose a solution in the form of Robustness Gym\n(RG), a simple and extensible evaluation toolkit that unifies 4 standard\nevaluation paradigms: subpopulations, transformations, evaluation sets, and\nadversarial attacks. By providing a common platform for evaluation, Robustness\nGym enables practitioners to compare results from all 4 evaluation paradigms\nwith just a few clicks, and to easily develop and share novel evaluation\nmethods using a built-in set of abstractions. To validate Robustness Gym's\nutility to practitioners, we conducted a real-world case study with a\nsentiment-modeling team, revealing performance degradations of 18%+. To verify\nthat Robustness Gym can aid novel research analyses, we perform the first study\nof state-of-the-art commercial and academic named entity linking (NEL) systems,\nas well as a fine-grained analysis of state-of-the-art summarization models.\nFor NEL, commercial systems struggle to link rare entities and lag their\nacademic counterparts by 10%+, while state-of-the-art summarization models\nstruggle on examples that require abstraction and distillation, degrading by\n9%+. Robustness Gym can be found at https://robustnessgym.com/",
"authors": "Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, Christopher R\u00e9",
"published": "2021-01-13",
"updated": "2021-01-13",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2305.03495v2",
"title": "Automatic Prompt Optimization with \"Gradient Descent\" and Beam Search",
"abstract": "Large Language Models (LLMs) have shown impressive performance as general\npurpose agents, but their abilities remain highly dependent on prompts which\nare hand written with onerous trial-and-error effort. We propose a simple and\nnonparametric solution to this problem, Automatic Prompt Optimization (APO),\nwhich is inspired by numerical gradient descent to automatically improve\nprompts, assuming access to training data and an LLM API. The algorithm uses\nminibatches of data to form natural language \"gradients\" that criticize the\ncurrent prompt. The gradients are then \"propagated\" into the prompt by editing\nthe prompt in the opposite semantic direction of the gradient. These gradient\ndescent steps are guided by a beam search and bandit selection procedure which\nsignificantly improves algorithmic efficiency. Preliminary results across three\nbenchmark NLP tasks and the novel problem of LLM jailbreak detection suggest\nthat Automatic Prompt Optimization can outperform prior prompt editing\ntechniques and improve an initial prompt's performance by up to 31%, by using\ndata to rewrite vague task descriptions into more precise annotation\ninstructions.",
"authors": "Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, Michael Zeng",
"published": "2023-05-04",
"updated": "2023-10-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2211.08073v4",
"title": "GLUE-X: Evaluating Natural Language Understanding Models from an Out-of-distribution Generalization Perspective",
"abstract": "Pre-trained language models (PLMs) are known to improve the generalization\nperformance of natural language understanding models by leveraging large\namounts of data during the pre-training phase. However, the out-of-distribution\n(OOD) generalization problem remains a challenge in many NLP tasks, limiting\nthe real-world deployment of these methods. This paper presents the first\nattempt at creating a unified benchmark named GLUE-X for evaluating OOD\nrobustness in NLP models, highlighting the importance of OOD robustness and\nproviding insights on how to measure the robustness of a model and how to\nimprove it. The benchmark includes 13 publicly available datasets for OOD\ntesting, and evaluations are conducted on 8 classic NLP tasks over 21 popularly\nused PLMs, including GPT-3 and GPT-3.5. Our findings confirm the need for\nimproved OOD accuracy in NLP tasks, as significant performance degradation was\nobserved in all settings compared to in-distribution (ID) accuracy.",
"authors": "Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang",
"published": "2022-11-15",
"updated": "2023-05-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG",
"cs.PF"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1803.02324v2",
"title": "Annotation Artifacts in Natural Language Inference Data",
"abstract": "Large-scale datasets for natural language inference are created by presenting\ncrowd workers with a sentence (premise), and asking them to generate three new\nsentences (hypotheses) that it entails, contradicts, or is logically neutral\nwith respect to. We show that, in a significant portion of such data, this\nprotocol leaves clues that make it possible to identify the label by looking\nonly at the hypothesis, without observing the premise. Specifically, we show\nthat a simple text categorization model can correctly classify the hypothesis\nalone in about 67% of SNLI (Bowman et. al, 2015) and 53% of MultiNLI (Williams\net. al, 2017). Our analysis reveals that specific linguistic phenomena such as\nnegation and vagueness are highly correlated with certain inference classes.\nOur findings suggest that the success of natural language inference models to\ndate has been overestimated, and that the task remains a hard open problem.",
"authors": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, Noah A. Smith",
"published": "2018-03-06",
"updated": "2018-04-16",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2311.01964v1",
"title": "Don't Make Your LLM an Evaluation Benchmark Cheater",
"abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",
"authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",
"published": "2023-11-03",
"updated": "2023-11-03",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2211.09110v2",
"title": "Holistic Evaluation of Language Models",
"abstract": "Language models (LMs) are becoming the foundation for almost all major\nlanguage technologies, but their capabilities, limitations, and risks are not\nwell understood. We present Holistic Evaluation of Language Models (HELM) to\nimprove the transparency of language models. First, we taxonomize the vast\nspace of potential scenarios (i.e. use cases) and metrics (i.e. desiderata)\nthat are of interest for LMs. Then we select a broad subset based on coverage\nand feasibility, noting what's missing or underrepresented (e.g. question\nanswering for neglected English dialects, metrics for trustworthiness). Second,\nwe adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration,\nrobustness, fairness, bias, toxicity, and efficiency) for each of 16 core\nscenarios when possible (87.5% of the time). This ensures metrics beyond\naccuracy don't fall to the wayside, and that trade-offs are clearly exposed. We\nalso perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze\nspecific aspects (e.g. reasoning, disinformation). Third, we conduct a\nlarge-scale evaluation of 30 prominent language models (spanning open,\nlimited-access, and closed models) on all 42 scenarios, 21 of which were not\npreviously used in mainstream LM evaluation. Prior to HELM, models on average\nwere evaluated on just 17.9% of the core HELM scenarios, with some prominent\nmodels not sharing a single scenario in common. We improve this to 96.0%: now\nall 30 models have been densely benchmarked on the same core scenarios and\nmetrics under standardized conditions. Our evaluation surfaces 25 top-level\nfindings. For full transparency, we release all raw model prompts and\ncompletions publicly for further analysis, as well as a general modular\ntoolkit. We intend for HELM to be a living benchmark for the community,\ncontinuously updated with new scenarios, metrics, and models.",
"authors": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher R\u00e9, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda",
"published": "2022-11-16",
"updated": "2023-10-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2310.17623v2",
"title": "Proving Test Set Contamination in Black Box Language Models",
"abstract": "Large language models are trained on vast amounts of internet data, prompting\nconcerns and speculation that they have memorized public benchmarks. Going from\nspeculation to proof of contamination is challenging, as the pretraining data\nused by proprietary models are often not publicly accessible. We show that it\nis possible to provide provable guarantees of test set contamination in\nlanguage models without access to pretraining data or model weights. Our\napproach leverages the fact that when there is no data contamination, all\norderings of an exchangeable benchmark should be equally likely. In contrast,\nthe tendency for language models to memorize example order means that a\ncontaminated language model will find certain canonical orderings to be much\nmore likely than others. Our test flags potential contamination whenever the\nlikelihood of a canonically ordered benchmark dataset is significantly higher\nthan the likelihood after shuffling the examples. We demonstrate that our\nprocedure is sensitive enough to reliably prove test set contamination in\nchallenging situations, including models as small as 1.4 billion parameters, on\nsmall test sets of only 1000 examples, and datasets that appear only a few\ntimes in the pretraining corpus. Using our test, we audit five popular publicly\naccessible language models for test set contamination and find little evidence\nfor pervasive contamination.",
"authors": "Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, Tatsunori B. Hashimoto",
"published": "2023-10-26",
"updated": "2023-11-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2103.03097v7",
"title": "Generalizing to Unseen Domains: A Survey on Domain Generalization",
"abstract": "Machine learning systems generally assume that the training and testing\ndistributions are the same. To this end, a key requirement is to develop models\nthat can generalize to unseen distributions. Domain generalization (DG), i.e.,\nout-of-distribution generalization, has attracted increasing interests in\nrecent years. Domain generalization deals with a challenging setting where one\nor several different but related domain(s) are given, and the goal is to learn\na model that can generalize to an unseen test domain. Great progress has been\nmade in the area of domain generalization for years. This paper presents the\nfirst review of recent advances in this area. First, we provide a formal\ndefinition of domain generalization and discuss several related fields. We then\nthoroughly review the theories related to domain generalization and carefully\nanalyze the theory behind generalization. We categorize recent algorithms into\nthree classes: data manipulation, representation learning, and learning\nstrategy, and present several popular algorithms in detail for each category.\nThird, we introduce the commonly used datasets, applications, and our\nopen-sourced codebase for fair evaluation. Finally, we summarize existing\nliterature and present some potential research topics for the future.",
"authors": "Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu",
"published": "2021-03-02",
"updated": "2022-05-24",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2309.14345v2",
"title": "Bias Testing and Mitigation in LLM-based Code Generation",
"abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.",
"authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",
"published": "2023-09-03",
"updated": "2024-01-09",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.18276v1",
"title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)",
"abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].",
"authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters",
"published": "2024-04-28",
"updated": "2024-04-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"D.1; I.2"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.07688v1",
"title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity",
"abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.",
"authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah",
"published": "2024-02-12",
"updated": "2024-02-12",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2305.19118v1",
"title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",
"abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate",
"authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi",
"published": "2023-05-30",
"updated": "2023-05-30",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.00306v1",
"title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation",
"abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.",
"authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee",
"published": "2023-11-01",
"updated": "2023-11-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.17916v2",
"title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks",
"abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.",
"authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra",
"published": "2024-02-27",
"updated": "2024-03-30",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.13840v1",
"title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models",
"abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.",
"authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang",
"published": "2024-03-15",
"updated": "2024-03-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.SI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.15215v1",
"title": "Item-side Fairness of Large Language Model-based Recommendation System",
"abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.",
"authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He",
"published": "2024-02-23",
"updated": "2024-02-23",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2307.03838v2",
"title": "RADAR: Robust AI-Text Detection via Adversarial Learning",
"abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.",
"authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho",
"published": "2023-07-07",
"updated": "2023-10-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.18140v1",
"title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models",
"abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.",
"authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith",
"published": "2023-11-29",
"updated": "2023-11-29",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.14473v1",
"title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",
"abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.",
"authors": "Joschka Haltaufderheide, Robert Ranisch",
"published": "2024-03-21",
"updated": "2024-03-21",
"primary_cat": "cs.CY",
"cats": [
"cs.CY"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2305.01937v1",
"title": "Can Large Language Models Be an Alternative to Human Evaluations?",
"abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.",
"authors": "Cheng-Han Chiang, Hung-yi Lee",
"published": "2023-05-03",
"updated": "2023-05-03",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.HC"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2304.03728v1",
"title": "Interpretable Unified Language Checking",
"abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.",
"authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass",
"published": "2023-04-07",
"updated": "2023-04-07",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2309.03852v2",
"title": "FLM-101B: An Open LLM and How to Train It with $100K Budget",
"abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.",
"authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang",
"published": "2023-09-07",
"updated": "2023-09-17",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.11764v1",
"title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs",
"abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.",
"authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar",
"published": "2024-02-19",
"updated": "2024-02-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CY",
"68T50",
"I.2.7; K.4.1"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2312.15198v2",
"title": "Do LLM Agents Exhibit Social Behavior?",
"abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.",
"authors": "Yan Leng, Yuan Yuan",
"published": "2023-12-23",
"updated": "2024-02-22",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.SI",
"econ.GN",
"q-fin.EC"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2401.00625v2",
"title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",
"abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.",
"authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao",
"published": "2024-01-01",
"updated": "2024-01-04",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2312.15478v1",
"title": "A Group Fairness Lens for Large Language Models",
"abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.",
"authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He",
"published": "2023-12-24",
"updated": "2023-12-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.12736v1",
"title": "Large Language Model Supply Chain: A Research Agenda",
"abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.",
"authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang",
"published": "2024-04-19",
"updated": "2024-04-19",
"primary_cat": "cs.SE",
"cats": [
"cs.SE"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.13095v1",
"title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications",
"abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.",
"authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh",
"published": "2023-11-22",
"updated": "2023-11-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.14208v2",
"title": "Content Conditional Debiasing for Fair Text Embedding",
"abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.",
"authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis",
"published": "2024-02-22",
"updated": "2024-02-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CY",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.01349v1",
"title": "Fairness in Large Language Models: A Taxonomic Survey",
"abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.",
"authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang",
"published": "2024-03-31",
"updated": "2024-03-31",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.04892v2",
"title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs",
"abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.",
"authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot",
"published": "2023-11-08",
"updated": "2024-01-27",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.09447v2",
"title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities",
"abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.",
"authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun",
"published": "2023-11-15",
"updated": "2024-04-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.00811v1",
"title": "Cognitive Bias in High-Stakes Decision-Making with LLMs",
"abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.",
"authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He",
"published": "2024-02-25",
"updated": "2024-02-25",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.07981v1",
"title": "Manipulating Large Language Models to Increase Product Visibility",
"abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.",
"authors": "Aounon Kumar, Himabindu Lakkaraju",
"published": "2024-04-11",
"updated": "2024-04-11",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.AI",
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.04489v1",
"title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning",
"abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.",
"authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell",
"published": "2024-02-07",
"updated": "2024-02-07",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CR",
"cs.CY",
"stat.ME"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2401.04057v1",
"title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems",
"abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.",
"authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam",
"published": "2024-01-08",
"updated": "2024-01-08",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.AI",
"cs.SE"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2401.11033v4",
"title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?",
"abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.",
"authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya",
"published": "2024-01-19",
"updated": "2024-04-03",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2312.06056v1",
"title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities",
"abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.",
"authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar",
"published": "2023-12-11",
"updated": "2023-12-11",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.AI",
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.06003v1",
"title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models",
"abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.",
"authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang",
"published": "2024-04-09",
"updated": "2024-04-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.18130v2",
"title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues",
"abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.",
"authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams",
"published": "2023-10-27",
"updated": "2023-11-07",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.HC"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.19465v1",
"title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models",
"abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.",
"authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao",
"published": "2024-02-29",
"updated": "2024-02-29",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.06852v2",
"title": "ChemLLM: A Chemical Large Language Model",
"abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem",
"authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li",
"published": "2024-02-10",
"updated": "2024-04-25",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2312.15398v1",
"title": "Fairness-Aware Structured Pruning in Transformers",
"abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.",
"authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar",
"published": "2023-12-24",
"updated": "2023-12-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CY",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2309.11653v2",
"title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents",
"abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.",
"authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li",
"published": "2023-09-20",
"updated": "2024-04-02",
"primary_cat": "cs.HC",
"cats": [
"cs.HC",
"cs.AI",
"cs.CR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.08517v1",
"title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward",
"abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.",
"authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma",
"published": "2024-04-12",
"updated": "2024-04-12",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.AI",
"cs.CL",
"cs.CR",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.13925v1",
"title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit",
"abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.",
"authors": "Boning Zhang, Chengxi Li, Kai Fan",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.15491v1",
"title": "Open Source Conversational LLMs do not know most Spanish words",
"abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.",
"authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego",
"published": "2024-03-21",
"updated": "2024-03-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2309.09397v1",
"title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings",
"abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.",
"authors": "Stephen Fitz",
"published": "2023-09-17",
"updated": "2023-09-17",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CY",
"cs.LG",
"cs.NE"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.08656v1",
"title": "Linear Cross-document Event Coreference Resolution with X-AMR",
"abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}",
"authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin",
"published": "2024-03-25",
"updated": "2024-03-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.10567v3",
"title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?",
"abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.",
"authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru",
"published": "2024-02-16",
"updated": "2024-02-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.06899v4",
"title": "Flames: Benchmarking Value Alignment of LLMs in Chinese",
"abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.",
"authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin",
"published": "2023-11-12",
"updated": "2024-04-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2308.05345v3",
"title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",
"abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.",
"authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie",
"published": "2023-08-10",
"updated": "2023-11-11",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.17553v1",
"title": "RuBia: A Russian Language Bias Detection Dataset",
"abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",
"authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",
"published": "2024-03-26",
"updated": "2024-03-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.04814v2",
"title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks",
"abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.",
"authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung",
"published": "2024-03-07",
"updated": "2024-04-10",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG",
"cs.SE"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2312.14769v3",
"title": "Large Language Model (LLM) Bias Index -- LLMBI",
"abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.",
"authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina",
"published": "2023-12-22",
"updated": "2023-12-29",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CY",
"cs.LG",
"I.2.7"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2305.03514v3",
"title": "Can Large Language Models Transform Computational Social Science?",
"abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.",
"authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",
"published": "2023-04-12",
"updated": "2024-02-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2303.01248v3",
"title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework",
"abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.",
"authors": "Haocong Rao, Cyril Leung, Chunyan Miao",
"published": "2023-03-01",
"updated": "2023-10-13",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.10199v3",
"title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",
"abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",
"authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",
"published": "2024-04-16",
"updated": "2024-04-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.02839v1",
"title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers",
"abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.",
"authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao",
"published": "2024-03-05",
"updated": "2024-03-05",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.13343v1",
"title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)",
"abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.",
"authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li",
"published": "2023-10-20",
"updated": "2023-10-20",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2309.08836v2",
"title": "Bias and Fairness in Chatbots: An Overview",
"abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.",
"authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo",
"published": "2023-09-16",
"updated": "2023-12-10",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CY"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.03033v1",
"title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models",
"abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.",
"authors": "Javier Gonz\u00e1lez, Aditya V. Nori",
"published": "2023-11-06",
"updated": "2023-11-06",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.16343v2",
"title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models",
"abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.",
"authors": "Xiang Chen, Xiaojun Wan",
"published": "2023-10-25",
"updated": "2024-03-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2404.03192v1",
"title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers",
"abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.",
"authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang",
"published": "2024-04-04",
"updated": "2024-04-04",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2308.05374v2",
"title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment",
"abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.",
"authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li",
"published": "2023-08-10",
"updated": "2024-03-21",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2305.18569v1",
"title": "Fairness of ChatGPT",
"abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.",
"authors": "Yunqi Li, Yongfeng Zhang",
"published": "2023-05-22",
"updated": "2023-05-22",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CY"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2401.00588v1",
"title": "Fairness in Serving Large Language Models",
"abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.",
"authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica",
"published": "2023-12-31",
"updated": "2023-12-31",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.LG",
"cs.PF"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2305.13862v2",
"title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models",
"abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.",
"authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto",
"published": "2023-05-23",
"updated": "2023-08-29",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.08472v1",
"title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
"abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.",
"authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze",
"published": "2023-11-14",
"updated": "2023-11-14",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2403.00884v2",
"title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment",
"abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.",
"authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen",
"published": "2024-03-01",
"updated": "2024-03-05",
"primary_cat": "cs.DB",
"cats": [
"cs.DB",
"cs.AI",
"cs.IR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2308.10149v2",
"title": "A Survey on Fairness in Large Language Models",
"abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.",
"authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang",
"published": "2023-08-20",
"updated": "2024-02-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.05694v1",
"title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",
"abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.",
"authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",
"published": "2023-10-09",
"updated": "2023-10-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2308.10397v2",
"title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models",
"abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.",
"authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He",
"published": "2023-08-21",
"updated": "2023-10-27",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.15007v1",
"title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models",
"abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.",
"authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye",
"published": "2023-10-23",
"updated": "2023-10-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CR",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.18580v1",
"title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity",
"abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.",
"authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu",
"published": "2023-11-30",
"updated": "2023-11-30",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2405.02219v1",
"title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems",
"abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.",
"authors": "Yashar Deldjoo",
"published": "2024-05-03",
"updated": "2024-05-03",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.14607v2",
"title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications",
"abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.",
"authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju",
"published": "2023-10-23",
"updated": "2024-04-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2206.13757v1",
"title": "Flexible text generation for counterfactual fairness probing",
"abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.",
"authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster",
"published": "2022-06-28",
"updated": "2022-06-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CY"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.18502v1",
"title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification",
"abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.",
"authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty",
"published": "2024-02-28",
"updated": "2024-02-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.09219v5",
"title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters",
"abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.",
"authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng",
"published": "2023-10-13",
"updated": "2023-12-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.11406v2",
"title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection",
"abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",
"authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",
"published": "2024-02-18",
"updated": "2024-02-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2307.11761v1",
"title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts",
"abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.",
"authors": "Yashar Deldjoo",
"published": "2023-07-14",
"updated": "2023-07-14",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2402.08189v1",
"title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs",
"abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.",
"authors": "Karthik Sreedhar, Lydia Chilton",
"published": "2024-02-13",
"updated": "2024-02-13",
"primary_cat": "cs.HC",
"cats": [
"cs.HC"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2311.07884v2",
"title": "Fair Abstractive Summarization of Diverse Perspectives",
"abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.",
"authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang",
"published": "2023-11-14",
"updated": "2024-03-30",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "LLM Fairness"
},
{
"url": "http://arxiv.org/abs/2310.08780v1",
"title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models",
"abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.",
"authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter",
"published": "2023-10-13",
"updated": "2023-10-13",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "LLM Fairness"
}
]