diff --git "a/related_53K/test_related_long_2404.16807v1.json" "b/related_53K/test_related_long_2404.16807v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.16807v1.json" @@ -0,0 +1,8623 @@ +[ + { + "url": "http://arxiv.org/abs/2404.16807v1", + "title": "Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning", + "abstract": "Generative Commonsense Reasoning (GCR) requires a model to reason about a\nsituation using commonsense knowledge, while generating coherent sentences.\nAlthough the quality of the generated sentences is crucial, the diversity of\nthe generation is equally important because it reflects the model's ability to\nuse a range of commonsense knowledge facts. Large Language Models (LLMs) have\nshown proficiency in enhancing the generation quality across various tasks\nthrough in-context learning (ICL) using given examples without the need for any\nfine-tuning. However, the diversity aspect in LLM outputs has not been\nsystematically studied before. To address this, we propose a simple method that\ndiversifies the LLM generations, while preserving their quality. Experimental\nresults on three benchmark GCR datasets show that our method achieves an ideal\nbalance between the quality and diversity. Moreover, the sentences generated by\nour proposed method can be used as training data to improve diversity in\nexisting commonsense generators.", + "authors": "Tianhui Zhang, Bei Peng, Danushka Bollegala", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Diverse Text Generation. A variety of methods have been proposed to enhance the diversity of NLG. Sampling-based decoding is an effective method to increase the generation diversity. Holtzman et al. (2019) proposed nucleus sampling to generate diverse content at the generation stage. Truncated sampling (Fan et al., 2018) prunes and then samples the tokens based on the probability distribution. Furthermore, Shen et al. (2019) proposed an MoE approach to diversify translation outputs. Moreover, incorporating external corpora in the MoE further promotes diversity, such as by using a knowledge graph (Yu et al., 2022; Hwang et al., 2023) or by a collection of retrieved sentences (Liu et al., 2023). Although LLMs have reported superior performance in numerous Natural Language Processing (NLP) tasks (Touvron et al., 2023; OpenAI, 2023b,a), to the best of our knowledge, diversifying their generations in commonsense reasoning with ICL has not been explored in prior work on GCR. In-Context Learning. Recent studies demonstrate that LLMs can exhibit robust few-shot performance on a variety of downstream tasks through ICL (Brown et al., 2020). ICL is a technique for instructing an LLM using one or more examples for a particular text generation task. The generated text is conditioned on both the input as well as the instruction prompt. Wang et al. (2023) show that in ICL, label words in the demonstration examples function as anchors, which aggregate semantic information to their word representations in the shallow (closer to the input) layers, while providing that information to the final predictions performed by the deeper (closer to the output) layers. In contrast to fine-tuning-based methods, ICL is computationally lightweight because it does not update the parameters of the LLM. 
Therefore, ICL is an attractive method when integrating task-specific knowledge to an LLM by simply changing the prompt and the few-shot examples (Dong et al., 2022).", + "pre_questions": [], + "main_content": "Introduction Commonsense reasoning is the ability to make logical deductions about concepts encountered in daily life, and is considered as a critical property of intelligent agents (Davis and Marcus, 2015). Concepts are mental representations of classes and are expressed using words in a language (Liu et al., 2023). Given the inputs, the GCR task requires a model to generate a high quality sentence that is grammatical and adheres to commonsense, evaluated by its similarity to a set of human-written reference sentences covering the same set of concepts (Lin et al., 2020). Often there exists multiple relationships between a given set of concepts, leading to alternative reasoning paths that take diverse view points. For example, given the four concepts dog, frisbee, throw and catch, different sentences can be generated as Dog; Catch; Frisbee; Throw A dog leaps to catch a thrown frisbee. The dog catches the frisbee when the boy throws it. A man throws away his dog's favourite frisbee expecting him to catch it in the air. A\u00a0dog catches\u00a0a\u00a0frisbee thrown\u00a0to it. A dog catches a frisbee thrown by its owner. A dog jumps in the air to catch a frisbee thrown by its owner. Figure 1: An example of diverse generated sentences sets in CommonGen (Lin et al., 2020) dataset. The generation shown at the bottom (in green ) are considered by human annotators to be more diverse than those at the top (in red ). shown in Figure 1. Although all sentences shown in Figure 1 are grammatical, the bottom set expresses diverse view points (e.g. from the dog\u2019s as well as the man\u2019s) compared to the set at the top. Apart from the generation quality, diversity is also an important factor in text generation because the low-diversity texts tend to be dull, repetitive or biased towards a particular view point (Tevet and Berant, 2021). Diversity is an important consideration in many Natural Language Generation (NLG) applications, such as story generation (Li et al., 2018), paraphrase generation (Gupta et al., 2018), and GCR (Yu et al., 2022; Liu et al., 2023). In GCR tasks, the input text often provides insufficient information to support diverse reasoning and generate multiple plausible outputs. Therefore, the diversity present in GCR task enables the exploration of different perspectives or all possible outcomes for a real-world situation. Existing methods promote diversity through special decoding strategies, such as nucleus sampling (Holtzman et al., 2019), or encoding interventions such as random noise injection (Gupta et al., 2018) or Mixture of Experts (MoE) approaches (Shen et al., 2019). We propose In-Context Diversification (ICD), a computationally-efficient and accurate method to improve the diversity in GCR, where the sentences are generated from a pre-trained LLM, and strikes arXiv:2404.16807v1 [cs.CL] 25 Apr 2024 a fine-balance between the output diversity and quality. ICD uses an ICL approach to increase the diversity of the sentences generated by an LLM, while maintaining the quality of the generation. ICD is a two-step process where it first lets an LLM to freely generate high-quality sentences that are grammatical, commonsense bearing and cover all the given input concepts. Next, ICD uses a userspecified diversity metric to evaluate the diversity of the generated sentences. 
If the diversity is low, ICD provides feedback to the LLM, instructing it to generate more diverse sentences considering the already generated sentences. Given that ICD is using LLMs to generate diverse sentences via ICL and without updating the parameters of the LLMs, an interesting and open question is whether an LLM can accurately judge the diversity of a given set of sentences, covering a common set of concepts. To answer this question, we conduct an experiment where we instruct GPT3.5-turbo to judge the diversity of the set of input sentences according to a five-scale grading system, and convert the predicted grades into binary judgements (i.e. diverse vs. non-diverse). We compare the LLM-assigned grades against those by a group of human annotators, and find a moderatelevel (Cohen\u2019s Kappa of 0.409) agreement between human vs. LLM judgements, demonstrating that LLMs can indeed be instructed to obtain diversity judgements for GCR tasks. We evaluate ICD on three GCR tasks/datasets: CommonGen (Lin et al., 2020), ComVE (Wang et al., 2020), and DimonGen (Liu et al., 2023). We find that our proposed ICD balances diversity and quality appropriately, improving their harmonic mean by at least 6% over that of a default baseline. Moreover, the sentences generated by ICD can be used as training data to improve diversity in a Seq2Seq model (Sutskever et al., 2014; Lewis et al., 2020), producing results that are comparable to the models that are trained on knowledge graphs or human-written text corpora (Liu et al., 2021; Fan et al., 2020; Li et al., 2021). We consider the problem of generating a set of diverse sentences that express commonsense reasoning, either by covering a set of given concepts (in CommonGen and DimonGen) or by providing an explanation for a given counterfactual statement (in ComVE). Formally, given a sequence (a set of concepts or a statement) X = {x1, . . . , xm}, the goal of GCR is to generate a set of grammatically correct and commonsense bearing sentences Y = {y1, . . . , yn}, where yi is the i-th output generated by the model with probability p(yi|X). Moreover, we require that the generated sentences {y1, . . . , yn} to be lexically as well as semantically diverse. Default Examples: Given several key words: [SRC], generate one coherent sentences using background commonsense knowledge: [TGT] Test instruction: Given several key words: [INPUT], generate one coherent sentence using background commonsense knowledge: [OUTPUT] Diversified Examples: Given several key words: [SRC], generate one coherent sentence using background commonsense knowledge: [TGT] Test instruction: Step1: Given several key words: [INPUT], generate [N] different and coherent sentences using background commonsense knowledge: [PRV] (If the diversity of [PRV] is low) Step2: You have generated the following sentences: [PRV], try to provide other reasonable sentences: [OUTPUT] (a) (b) Figure 2: An example of default and diversified prompts is shown for an instance selected from the CommonGen dataset. Here, the default prompt shown in Figure 2a is taken from Li et al. (2023). Few-shot examples are included in each prompt where [SRC] denotes the set of input concepts and [TGT] the corresponding sentences in CommonGen. For a given set of [INPUT] concepts, the LLM is then required to generate sentences at the slot [OUTPUT]. As shown in Figure 2b, ICD uses the diversified prompt, which operates in two steps. Step 1 generates a set of [N] sentences, [PRV]. 
We check for the diversity among the sentences in [PRV], and if it is low, we use the prompt in Step 2 to generate the final set of sentences. 3.1 Sentence Generation To explain our proposed ICD, let us consider GCR on CommonGen, where we must generate a set of sentences Y, such that each sentence contains all of the input concepts X as shown in Figure 2a. Given an LLM, we can design a prompt that contains a task-specific instruction and one or more examples containing the input concepts (denoted by [SRC] in Figure 2) and the corresponding human-written sentences containing all given input concepts (denoted by [TGT]) to instruct the LLM to generate output sentences Y (denoted by [OUTPUT]) for a given set of input concepts X (denoted by [INPUT]). We refer to a prompt of this nature as a default prompt, and the corresponding set of generated sentences by Sdef. Note that the default prompt does not necessarily guarantee that the generated set of sentences will be diverse and an LLM could return sentences that are highly similar to each other. To address this issue, we propose a diversified prompt as shown in Figure 2b. Specifically, the diversified prompt operates in two steps. In Step 1, we require that the LLM generate N sentences that are different, in addition to being coherent and commonsense bearing. Next, we use a suitable diversity metric to evaluate the level of diversity among the generated set of sentences. If the diversity of the generated senAlgorithm 1 In-Context Diversification (ICD) Input: Generated sets of sentences Sdef and Sdiv, respectively from default and diversified prompts, the number of desired output sentences N, and a diversity metric f. Output: Output set of sentences S\u2217 S\u2217\u2190\u2205 \u03b1 \u21900 for S \u2208(Sdef \u222aSdiv) do if (|S| == N) \u2227(f(S) \u2265\u03b1) then \u03b1 \u2190f(S) S\u2217\u2190S end if end for return S\u2217 tences is low, in Step 2, we show those sentences to the LLM and instruct it to generate sentences that are different to those. As the criteria for triggering Step 2, we check whether the exact same sentence has been generated multiple times by the LLM during Step 1. The final set of generated sentences is denoted by Sdiv. 3.2 Diversity-based Sampling Because of the limited availability of humanwritten reference sentences for evaluating GCR models, there exists a trade-off between quality vs. diversity when generating sentences for GCR tasks.1 Simply maximising for diversity often leads to generations that do not cover the input concepts in a natural way. For example, a randomly selected set of sentences would be highly diverse, yet unlikely to capture the input concept sets. On the other hand, if we force an LLM to generate sentences that contain all of the input concepts, it might find difficult to generate semantically diverse sentences and resort to trivial lexical or syntactic diversity tricks such as morphological inflections or word-order permutations. To address this issue, we propose a diversitybased sampling method shown in Algorithm 1. Consider that the default prompt provides a set Sdef of sentences that have not been optimised for diversity (likely to have a higher quality), while on the other hand the diversified prompt provides a set Sdiv of sentences that are further refined for diversity (likely to have a higher diversity). 
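To make this concrete, the following is a minimal sketch of how the two candidate pools could be collected for one input. The llm helper, its decoding behaviour, and the way Sdef is sampled (here, by repeated calls to the default prompt) are illustrative assumptions rather than the exact implementation; the actual prompt wording is the one shown in Figure 2.

```python
# Illustrative sketch only: collecting S_def and S_div for one concept set.
# `llm(prompt)` is a hypothetical wrapper that returns a list of sentences
# parsed from a single completion; it is not the exact API used in the paper.

def build_default_prompt(concepts):
    # Paraphrases the default prompt of Figure 2a (few-shot examples omitted).
    return (f"Given several key words: {', '.join(concepts)}, generate one "
            "coherent sentence using background commonsense knowledge:")

def build_diversified_prompt(concepts, n):
    # Step 1 of the diversified prompt of Figure 2b.
    return (f"Given several key words: {', '.join(concepts)}, generate {n} "
            "different and coherent sentences using background commonsense knowledge:")

def has_exact_duplicates(sentences):
    # Trigger for Step 2: the exact same sentence generated more than once.
    return len({s.strip().lower() for s in sentences}) < len(sentences)

def collect_candidates(concepts, n, llm):
    # S_def: sample the default prompt repeatedly (one sentence per call).
    s_def = [llm(build_default_prompt(concepts))[0] for _ in range(n)]
    # S_div, Step 1: ask for n different sentences in a single call.
    s_div = llm(build_diversified_prompt(concepts, n))
    # S_div, Step 2: only triggered when Step 1 repeats an identical sentence.
    if has_exact_duplicates(s_div):
        step2 = ("You have generated the following sentences: "
                 + " ".join(s_div) + ", try to provide other reasonable sentences:")
        s_div = llm(step2)
    return s_def, s_div
```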
We wish to find a set of sentences that simultaneously satisfies the following criteria: (a) it must contain exactly N sentences, as specified by the user, and (b) it must have a high diversity score, measured using a user-specified diversity metric f (∈ R≥0). We formalise this as a subset search problem, where we compute the union Sdef ∪ Sdiv and search for the subset S∗ that jointly satisfies those criteria following the procedure detailed in Algorithm 1. Although the total number of subsets of size N is (|Sdef ∪ Sdiv| choose N), it is sufficiently small for the values of N in our GCR tasks, which makes this subset search fast in practice. (Footnote 1: This trade-off is further empirically verified in § 5.1.) 4 Experimental Settings 4.1 Tasks and Datasets We evaluate ICD on three GCR tasks as follows. Constrained Commonsense Reasoning: In the CommonGen (Lin et al., 2020) benchmark, a model is required to generate a sentence covering a given set of concepts such that background commonsense knowledge associated with the input concepts is reflected. This dataset contains 35K distinct concept sets (train = 32651, dev = 993, and test = 1497) with corresponding human-written sentences (train = 67389, dev = 4018, and test = 6042). Each instance contains on average 3-5 input concepts. Commonsense Explanation Reasoning: ComVE (Wang et al., 2020) is part of the SemEval 2020 commonsense validation task, where for a given counterfactual statement, a model is required to generate an explanation providing a reason describing why the statement is nonsensical. This dataset contains 10K (train = 8532, dev = 476, and test = 992) examples, where each example contains three reference outputs. Diversified GCR: DimonGen (Liu et al., 2023) involves generating diverse sentences that describe the relationships between two given concepts. It is a challenging task because it requires generating reasonable scenarios for a given pair of concepts without any context. This dataset contains 17109 instances (train = 15263, dev = 665, test = 1181), where each instance has 3-5 references. 4.2 Evaluation Metrics We measure both the quality and diversity of the sentences generated by models using the metrics described next. 4.2.1 Quality Metrics We compare a generated sentence by a model against a set of human-written references to evaluate the quality of the generation using several metrics: BLEU (Papineni et al., 2002) measures n-gram precision against human reference texts, SPICE (Anderson et al., 2016) measures the semantic propositional overlap between two sentences, and BERTScore (Zhang et al., 2020) uses contextualised word embeddings to measure the semantic similarity between tokens in two sentences. In alignment with prior work (Yu et al., 2022; Liu et al., 2023; Hwang et al., 2023), when multiple candidate sentences are generated for a test case, we select the highest-scoring candidate for evaluating quality. 4.2.2 Diversity Metrics Pairwise Diversity: We use self-BLEU (Zhu et al., 2018) to measure n-gram overlap among sentences within each generated set. The metric computes the average sentence-level similarity between all pairwise combinations of the generations in the generation set. Note that unlike BLEU, self-BLEU does not require human-generated references for measuring diversity. We use self-BLEU-3/4 (corresponding to n = 3 and 4) in our experiments. Lower self-BLEU scores indicate higher lexical diversity.
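As a concrete illustration of how a pairwise metric such as self-BLEU can be plugged in as the diversity function f of Algorithm 1, below is a minimal sketch of the subset selection. It uses NLTK's sentence-level BLEU; because lower self-BLEU means higher diversity, the sketch maximises 1 − self-BLEU, which is our reading of the algorithm rather than the paper's exact implementation.

```python
# Illustrative sketch of Algorithm 1 (diversity-based subset selection) with a
# self-BLEU-based diversity function f. Requires: pip install nltk
from itertools import combinations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(sentences, n=3):
    """Average sentence-level BLEU-n of each sentence scored against the
    remaining sentences in the set (used as references)."""
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    scores = []
    for i, hyp in enumerate(sentences):
        refs = [s.split() for j, s in enumerate(sentences) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(), weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

def select_subset(s_def, s_div, n_out):
    """Return the size-n_out subset of S_def ∪ S_div with the highest
    diversity score (here 1 - self-BLEU-3, so that higher = more diverse)."""
    pool = list(dict.fromkeys(s_def + s_div))  # union, preserving order
    best, best_score = None, -1.0
    for subset in combinations(pool, n_out):
        score = 1.0 - self_bleu(list(subset), n=3)
        if score >= best_score:
            best, best_score = list(subset), score
    return best
```

The exhaustive loop over combinations mirrors the observation above that (|Sdef ∪ Sdiv| choose N) remains small for the values of N used in these tasks.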
Corpus Diversity: To measure the variety within our generated text corpus, we employ Distinctk (Li et al., 2016), which calculates the ratio of unique k-grams to the total number of k-grams. This metric is particularly useful for adjusting the bias of LLMs toward generating longer sequences, ensuring that diversity is not artificially inflated by the sentence length. Additionally, we use Entropyk to evaluate the distributional uniformity of kgram occurrences, considering word frequencies for a more nuanced view of diversity. Higher Distinct-k and Entropy-k scores indicate higher diversity. Semantic Diversity: All previously described diversity metrics are limited to evaluating lexical diversity. To measure diversity at a semantic level, we propose self-cosSim, which is the average pairwise cosine similarity between generated sentences, computed using sentence embeddings obtained from SimCSE (Gao et al., 2021). Likewise, we define the self-BERTScore as a diversity metric that averages the BERTScores for all generated sentence pairs. Lower self-cosSim and self-BERTScore values indicate higher semantic diversity. 4.2.3 Combined Metrics We would prefer GCR models that have both high quality and high diversity. To incoporate both aspects into a single metric, we compute the Harmonic Mean between (a) the self-BLEU-4 as the diversity metric, and (b) BERTScore as the quality metric. As discussed in \u00a7 3.2, there exists a tradeoff between quality and diversity in GCR. Therefore, the harmonic mean is suitable when averaging quality and diversity scores.2 Alihosseini et al. (2019) proposed Fr\u00b4 echet BERT Distance (FBD) as a joint metric for simultaneously measuring both the quality and diversity of NLG. FBD is inspired by the Fr\u00b4 echet Inception Distance (FID), proposed by Heusel et al. (2017), for measuring the quality of image generation. Specifically, FBD computes the pooler output3 of a sentence as its embedding (Devlin et al., 2019) and represents a set of sentences using the mean vector and the covariance matrix computed from their sentence embeddings. Next, Wasserstein-2 distance is computed between the set of reference sentences and the set of generated sentences, which captures both the distance between the means as well as variance in the distributions. Lower FBD scores indicate high combined performance. 4.3 Implementation Details We use GPT3.5-turbo and Vicuna-13b-v1.54 as LLMs with temperature set to 1.0 in our experiments. By using two LLMs with significantly differing number of parameters and by including, Vicuna, an open source LLM, we plan to improve the reliability and reproducibility of our results. Max response length is set to 25 tokens. The inference times for CommonGen, ComVE and DimonGen datasets are respectively 5-6, 2-3 and 1-2 hours. The cost of running ICD with GPT3.5-turbo are ca. $6, $4 and $4 respectively for CommonGen, ComVE and DimonGen datasets. On the other hand, the costs of fine-tuning on GPT3.5-turbo are much higher at $58.8 for CommonGen, $24.7 for ComVE and $32.0 for DimonGen. Moreover, fine-tuning with LoRA (Hu et al., 2022) with rank of 8 and alpha of 16 on Vicuna takes ca. 34 hours. We use BART-large5 for MoE-based models. We use the GPT3.5-turbo to generate sentences for the CommonGen train/dev/test sets using the de2We use self-BLEU-4 for diversity and BERTScore for quality in Harmonic Mean due to their reliability shown in preliminary evaluations. Other metric pairs are in Appendix D. 
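For concreteness, the harmonic-mean combination described above can be computed as follows; converting self-BLEU-4 into a higher-is-better diversity term as (100 − self-BLEU-4) is our reading of the metric, and it reproduces the scale of the reported Harmonic Mean values.

```python
def harmonic_mean_metric(self_bleu4, bert_score):
    """Combined score: harmonic mean of a diversity term (100 - self-BLEU-4,
    so that higher = more diverse) and BERTScore as the quality term.
    Both inputs are assumed to be on a 0-100 scale."""
    diversity = 100.0 - self_bleu4
    if diversity + bert_score == 0:
        return 0.0
    return 2.0 * diversity * bert_score / (diversity + bert_score)

# Example with the CommonGen numbers reported for ICD in Table 1:
# harmonic_mean_metric(21.0, 67.4) is roughly 72.7, matching the reported value.
```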
3The last layer\u2019s hidden-state of the first token of the sequence is further processed by a Linear layer and a Tanh activation function. 4https://huggingface.co/lmsys/vicuna-13b-v1.5 5https://huggingface.co/facebook/bart-large fault, diversified and for ICD. For model training, we use the Adam optimiser (Kingma and Ba, 2015) with a batch size of 64, a learning rate of 3e-5 and a beam size of 5. All of the MoE-based models are trained for 20 epochs and required to generate k = 3 sentences. All experiments, except with GPT3.5-turbo, are conducted on a single RTX A6000 GPU. 5 Results and Discussion 5.1 Commonsense Generation We compare the commonsense generations made by ICD against those using the default and diversified prompts. For this purpose, we use GPT3.5-turbo as the LLM and use the same 10 few-shot examples in all prompts for ICL. Further templates of the default and diversified prompts used for each task are given in Appendix E. To assess the impact of ICL, we compare against finetune method, wherein GPT3.5-turbo is fine-tuned on the entire training set in each dataset. Specifically, we use multiple human-written sentences, available in the training data for the three datasets to separately fine-tune the models for each task. It is noteworthy that the fine-tune method uses a substantially larger dataset for training (e.g., 67,389 sentences from CommonGen) compared to the 10 examples used by the ICL-based approaches. We use self-BLEU-3 as the diversity metric f in Algorithm 1 for ICD in this evaluation. The outcomes, presented in Table 1, highlight the diversity and quality metrics of these methods across the CommonGen, ConVE, and DimonGen datasets. Additionally, a human baseline is introduced to evaluate the diversity of sentences written by humans, where we pair-wise compare the human-written sentences for each input in the instances in the benchmark datasets using diversity metrics. Note that however, the human baseline must not be considered as an upper-bound for diversity because there are only a smaller number of human-written sentences per instance in the benchmark datasets. From Table 1, we see that fine-tune generates sentences that have high semantic and corpus diversity, and outperforms the human baseline. However, recall that fine-tune requires a much larger training set and is computationally costly compared to all ICL-based methods. Moreover, we see that ICD can strike a good balance between quality and diversity in the sentences generated. 
Among the ICL-based methods, ICD achieves the best diSemantic Diversity \u21d3 Corpus Diversity \u21d1 Pairwise Diversity \u21d3 Quality \u21d1 Combined self-cosSim self-BERTScore Entropy-4 Distinct-4 self-BLEU-3 self-BLEU-4 BLEU-3 BLEU-4 SPICE BERTScore Harmonic \u21d1 FBD \u21d3 CommonGen Human 67.3 60.6 10.9 91.0 25.4 17.6 Fine-tune 64.7 55.9 11.4 91.1 26.9 17.9 41.2 32.1 30.3 64.2 72.1 51.9 default 93.3 88.7 10.2 53.7 77.2 72.4 50.8 40.9 30.1 70.4 39.6 60.2 diversified 85.2 69.8 11.0 83.7 44.4 34.9 44.3 34.6 28.5 65.0 65.4 53.9 ICD 83.5 66.2 11.0 88.5 31.0 21.0 47.4 37.7 29.1 67.4 72.7 51.8 ComVE Human 62.7 47.0 9.6 96.1 12.4 8.1 Fine-tune 59.8 42.6 9.8 95.2 13.4 10.3 27.4 19.4 33.1 53.7 67.2 47.6 default 83.9 73.5 9.6 74.3 50.8 45.2 27.5 19.7 36.2 55.1 54.9 50.9 diversified 76.1 56.5 9.7 88.0 23.2 16.5 30.5 21.8 35.8 56.5 67.4 47.9 ICD 72.5 51.1 9.8 90.1 13.7 8.7 29.0 20.8 36.1 55.5 69.0 48.7 DimonGen Human 56.8 47.0 10.1 85.6 14.7 8.7 Fine-tune 43.4 33 10.4 98.7 6.8 3.4 17.7 10.7 15.5 42 58.5 51.6 default 75.7 71.3 10 83.2 43.4 37.3 15.9 9.5 16.4 44.5 52.1 68.2 diversified 57.1 46.9 10.5 95.9 11.2 6.5 11.4 6.4 15.2 39.9 55.9 69.0 ICD 56.7 45.7 10.4 96.3 6.5 3.5 13.2 7.6 15.4 41.7 58.2 68.0 Table 1: Diversity and quality scores on CommonGen, ComVE and DimonGen with GPT3.5-turbo LLM. Best results on each task for each metric are shown in italics, while the best performing ICL results are shown in bold. versity scores on all diversity metrics in all three datasets. It also exhibits higher diversity compared against the human-written references. Moreover, ICD outperforms default and diversified according to the Combined metrics. ICD also achieves a Harmonic Mean comparable to that of the fine-tune baseline. Although default reports the best quality scores, it has low diversity, and is consistently outperformed by diversified and ICD on diversity metrics. On the other hand, diversified generally scores lower on the quality metrics. Compared to default and diversified, ICD enhances generation diversity while maintaining a satisfactory level of quality. ICD is also more stable to the sampling method such as temperature than fine-tune, as shown in Appendix B. Note that fine-tune is not an ICL setting (the focus of this paper) and is included only as a baseline to demonstrate the level of performance that can be achieved by finetuning on a much larger dataset. Despite this, ICD outperforms fine-tune on the Pairwise Diversity in all three datasets, and Combined metrics in the CommonGen dataset. As an open source alternative LLM to GPT3.5-turbo, we repeat this evaluation with Vicuna-13b (Zheng et al., 2023) in Table 2. The same 10 few-shot examples as used with GPT3.5-turbo are used in this experiment for the ICL-based methods. Full table on three datasets are shown in Appendix C. Table 2 reconfirms ICD\u2019s ability to balance both quality and diversity according to the Combined metrics (i.e. Harmonic Mean and FBD) on this dataset. Interestingly, we see that Method SCS \u21d3 SBS \u21d3 E-4\u21d1 D-4\u21d1 SB-3\u21d3 BLEU-3\u21d1 SPICE\u21d1 HM \u21d1 FBD \u21d3 Fine-tune 59.6 49.9 11.4 93.3 22.8 35.8 27.6 69.9 52.4 Default 82.2 73.8 10.9 74.9 52.9 44.6 29.1 60.2 56.2 Diversified 59.1 53.3 11.3 91.3 23.6 32.6 24.3 68.6 53.2 ICD 59.3 49.8 11.3 93.7 11.3 34.2 25.5 73.4 51.0 Table 2: GCR on CommonGen using Vicuna-13b. ICD uses self-BLEU-3. Here, SCS: self-CosSim, SBS: selfBERTScore, E-4: Entropy-4, D-4: Distinct-4, SB-3: self-BLEU3, HM: Harmonic Mean. 
Best results for each metric are shown in italics, while the best performing ICL results are shown in bold. Method SCS \u21d3 SBS \u21d3 E-4\u21d1 D-4\u21d1 SB-3\u21d3 BLEU-3\u21d1 SPICE\u21d1 HM \u21d1 FBD \u21d3 self-BLEU-3 83.5 66.2 11.0 88.5 31.0 47.4 29.1 72.7 51.8 self-CosSim 81.0 70.1 10.9 82.5 44.5 47.6 29.3 65.7 51.8 self-BERTScore 83.1 62.8 11.0 87.0 36.3 46.5 28.9 69.6 51.8 Table 3: Comparing the effect of using different diversity metrics, f, in Algorithm 1 for ICD. We use GPT3.5-turbo as the LLM and the best results on CommonGen dataset are in bold. Here, SCS: self-CosSim, SBS: self-BERTScore, E-4: Entropy-4, D-4: Distinct-4, SB-3: self-BLEU3, HM: Harmonic Mean. methods that use Vicuna-13b to be more diverse compared to those that use GPT3.5-turbo, while the latter showing better generation quality. In Table 3, we use different diversity metrics as f in Algorithm 1 to study the effect on text generation of ICD. We see that self-BLUE-3 and self-CosSim perform similarly across the quality metrics. SelfBERTScore shows a slightly lower quality (BLEU3 and SPICE), which indicates some level of overfitting to the diversity metric being used. According to the combined metrics, any of those diversity metrics can be used with ICD to obtain comparable performance. Semantic Diversity \u21d3 Corpus Diversity \u21d1 Pairwise Diversity \u21d3 Quality \u21d1 Combined self-cosSim self-BERTScore Entropy-4 Distinct-4 self-BLEU-3 self-BLEU-4 BLEU-3 BLEU-4 SPICE BERTScore Harmonic Mean \u21d1 FBD \u21d3 KG-BART 42.1 30.9 32.7 EKI-BART 46.0 36.1 33.4 KFCNet-w/o FC 50.2 42.0 35.9 KFCNet 57.3 51.5 39.1 MoE 89.3 81.9 9.7 61.6 63.1 56.6 49.0 38.5 33.5 70.6 53.8 61.7 MoKGE 88.7 80.6 9.9 65.2 60.4 53.6 48.8 38.4 33.1 70.3 55.9 60.8 default+MoE 90.8 84.2 9.7 61.2 65.6 58.8 51.8 41.3 34.7 73.1 52.7 61.9 diversified+MoE 85.3 79.9 9.8 63.2 58.3 52.6 51.4 41.4 34.6 71.6 57.0 54.5 ICD+MoE 90.4 82.3 9.8 64.9 58.4 50.5 53.2 43.1 35.4 73.8 59.3 62.5 Table 4: Downstream evaluation of the LLM-generated sentences. Top block methods use human-generated resources for training, while the ones in the bottom block are trained on LLM-generated sentences. MoE approaches are shown in the middle block and bottom block. BART-large is used as the generator for MoE-based methods. Best results for each metric are shown in bold, while the best performing MoE for quality is shown in underline. Figure 3: Human vs. GPT3.5 diversity ratings for randomly sampled sets of sentences generated by ICD. Cohen\u2019s \u03ba = 0.409 indicates a moderate agreement. 5.2 Downstream Evaluation The experiments presented in \u00a7 5.1 show the ability of our proposed ICD to generate diverse and commonsense bearing sentences. Therefore, an important question with practical implications is whether we can use the sentences generated by ICD as additional training data to improve both diversity and quality of previously proposed models on the GCR task, which could be seen as a downstream (extrinsic) evaluation. For this purpose we select the MoE (Shen et al., 2019), which diversifies the generation by selecting outputs from a mixture of experts. Each expert is assigned a randomly generated sequence of tokens, which is used as a prefix for all inputs sent to that expert. For each input, an expert is selected according to the value of a latent variable, which is trained using the hard-EM algorithm. We follow Liu et al. 
(2023) and train three experts that retrieve sentences from the collection of sentences generated by ICD for concept sets in the CommonGen train split (210846 sentences in total). We use BART-large (Lewis et al., 2020) as the base model, which has shown to produce high quality commonsense generations (Zhang et al., 2023) as the generator for all experts (see Appendix A for further details). We denote this method by ICD+MoE. As baselines for comparisons, we repeat the above process using the sentences generated by default and diversified, which we denote respectively as default+MoE and diversified+MoE in Table 4. Moreover, we compare the performance against two previously proposed MoE models: MoE (Shen et al., 2019) and MoKGE (Yu et al., 2022). MoE relies solely on the base model, whereas MoKGE requires each expert to use different sets of concepts from the ConceptNet (Speer et al., 2017) knowledge graph (KG). Because Yu et al. (2022) do not evaluate their MoKGE method on CommonGen, we ran their original implementation6 on CommonGen and report results in Table 4. All previously proposed GCR methods are exclusively trained using human-created data (e.g. sentences written by human and/or manually compiled KGs such as ConceptNet), whereas the methods described thus far in this section are trained on sentences generated by an LLM (GPT3.5-turbo). Therefore, to evaluate the feasibility of using LLMgenerated sentences for training GCR models, we include the following previously proposed GCR models that are trained using a combination of corpora and KGs: KG-BART (Liu et al., 2021),EKIBART (Fan et al., 2020) and KFCNet (Li et al., 2021). For KFCNet, we present its two results \u2013 KFCNet w/o FC, which relies only on sentences including the input concepts, without further processing, and KFCNet, which additionally ranks candidates and adds contrastive modules for the encoder and the decoder (Li et al., 2021). However, note that those methods do not consider diversifica6https://github.com/DM2-ND/MoKGE Human: \u2022 The group will use the tool to make a piece of art out of metal. \u2022 I use a tool to cut a piece of metal out of the car. \u2022 The man used a piece of metal and the tools. Default: \u2022 A piece of metal is being used as a tool. \u2022 A piece of metal was used as a tool in the construction project. \u2022 A metal tool is being used to shape a piece of metal. ICD: \u2022 The piece of metal is essential for any handyman's toolkit. \u2022 The metal tool is a useful piece for working with metal. \u2022 With the right tools, any piece of metal can be transformed into something useful. CommonGen: Input: (piece, use, tool, metal) Human: \u2022 No one can digest electronic goods. \u2022 Electronic products must not be eaten. \u2022 You would die if you ate electronics. Default: \u2022 Electronic goods are not edible and are not meant for consumption. \u2022 Electronic goods are not edible and cannot be consumed as food. \u2022 Electronic goods are not edible and are meant for functional use rather than consumption. ICD: \u2022 Eating electronic goods can damage the digestive system and cause serious health issues. \u2022 It is not healthy or safe to eat electronic goods as they are made up of toxic materials. \u2022 Electronic goods are not edible and cannot be consumed as food. ComVE: Input: My friend like to eat electronic goods. Figure 4: Sentences generated by default prompt and ICD against those by humans on CommonGen and ComVE test instances. 
ICD generates more diverse and high quality sentences than default. tion, and do not report performance using diversity metrics. Therefore, we report only their published results for generation quality in Table 4. From Table 4 we see that diversified+MoE always outperforms the original MoE in all diversity metrics, which shows that sentences generated from LLMs can be used to diversify MoE-based GCR. ICD+MoE closely matches the performance of diversified+MoE on diversity metrics, while consistently outperforming both diversified+MoE and default+MoE on quality metrics. In particular, the quality metrics reported by ICD+MoE (underlined in Table 4) are competitive against those obtained by the models that are trained on human-compiled resources (in the top block), except against KFCNet. This finding hints at potential improvement gains for GCR by using hybrid training resources that combine both human-compiled and LLM-generated data, which we highlight as an interesting future research direction. 5.3 Diversity-Awareness of LLMs Given that we use LLMs to produce diverse generations via ICL, it remains an open question whether an LLM would agree with humans on the diversity of a given set of sentences. To answer this question, we use randomly selected 210 sentences (35 sets, each containing 6 sentences) generated by ICD (using self-BLEU-3 as the diversity metric) for the input concept sets in the CommonGen dataset. We instruct GPT3.5-turbo to rate the diversity of a given set of sentences according to five diversity ratings 1-5 with 1 being highly similar, while 5 being highly diverse.7 We provide the same instruction as the annotation guidelines for eight 7Detailed prompt templates are shown in Appendix E. human-annotators, who are graduate students in NLP. To reduce the subjective variability in human judgements, we average and then normalise the ratings following the Likert scale. In Figure 3, we plot the GPT-assigned ratings against those by humans. We further split the ratings into high vs. low diversity ratings depending on whether the rating is greater or lesser than 3. The majority of the data points are distributed along the diagonal quadrants and a Cohen\u2019s Kappa of 0.409 indicating a moderate level of agreement between GPT and human ratings for diversity. The generated sentences using the default prompt, ICD and the human references in CommonGen and ComVE datasets for a single test instance are shown in Figure 4. From Figure 4 we see that the sentences generated using the default prompt often results in significant token overlap, thereby lowering the diversity. On the other hand, ICD generates both lexically and semantically diverse sentences, covering the diverse viewpoints in the human references. 6 Conclusion We proposed, ICD, an ICL-based method for achieving the optimal balance between diversity and quality in text generation via LLMs. Our experiments, conducted on three GCR tasks, demonstrate that ICD significantly improves the diversity without substantially compromising the quality. Furthermore, we found that by training on the sentences generated by ICD, we can improve diversity in previously proposed GCR methods. 7 Limitations This study primarily focuses on the generation of English sentences using pre-trained LLMs, a limitation shaped by the datasets we employed. Specifically, we used the ComVE (Wang et al., 2020), CommonGen (Lin et al., 2020) and DimonGen (Liu et al., 2023) datasets, which are well-regarded for evaluating diversified commonsense reasoning in English. 
Therefore, our evaluation of the generation quality was limited to English, which is a morphologically limited language. Future research could expand this scope to include multilingual pretrained models, thereby encompassing a broader linguistic spectrum. Our approach is primarily geared towards optimizing the trade-off between diversity and quality in text generation. Consequently, we maintained consistent default instructions across all experiments, adopting the standard commonsense generation prompts used in Li et al. (2023) as our default instructions. We conducted our experiments using both a closed model (i.e. GPT3.5-turbo) as well as an open-source one (i.e. Vicuna-13b-v1.5) to promote the reproducibility of our results, which are reported using multiple public available benchmarks. However, there exist many other LLMs with varying numbers of parameters and trained on different corpora. Therefore, we consider it is important to evaluate our proposed method on a broad range of LLMs to verify the generalisability of our proposed method. However, conducting such a broad analysis can be computationally costly and expensive. For example, although GPT-4 is known to have superior text generation capabilities, it incurs substantially greater costs (being 30 times more expensive than GPT3.5-turbo at the current pricing). Nevertheless, ICD is adaptable and could be extended to other LLMs. 8 Ethical Considerations In this work, we did not create or release any manually annotated data. Our work is based on the publicly available datasets, CommonGen, ComVE, and DimonGen. To the best of our knowledge, no ethical issues have been reported for those datasets. Therefore, we do not foresee any data-related ethical issues arising from our work. However, LLMs are known to generate responses that may reflect societal biases and potentially harmful content. We have not verified whether the GPT3.5-turbo and Vicuna-13b LLMs that we use in our experiments have similar problems. Therefore, it is important to test on existing benchmarks for social biases and harmful generations before the proposed method is deployed to diversify existing GCR methods used by human users. To elicit human judgements of diversity for the sentences generated by ICD, we use annotators who are familiar with working with LLMs. It is possible that their subjective (and possibly biased) viewpoints might have influenced the ratings provided. Therefore, it will be important to conduct the evaluation involving a group of annotators with different backgrounds to validate the findings reported in this analysis.", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2404.12452v1", + "title": "Characterizing LLM Abstention Behavior in Science QA with Context Perturbations", + "abstract": "The correct model response in the face of uncertainty is to abstain from\nanswering a question so as not to mislead the user. In this work, we study the\nability of LLMs to abstain from answering context-dependent science questions\nwhen provided insufficient or incorrect context. We probe model sensitivity in\nseveral settings: removing gold context, replacing gold context with irrelevant\ncontext, and providing additional context beyond what is given. In experiments\non four QA datasets with four LLMs, we show that performance varies greatly\nacross models, across the type of context provided, and also by question type;\nin particular, many LLMs seem unable to abstain from answering boolean\nquestions using standard QA prompts. 
Our analysis also highlights the\nunexpected impact of abstention performance on QA task accuracy.\nCounter-intuitively, in some settings, replacing gold context with irrelevant\ncontext or adding irrelevant context to gold context can improve abstention\nperformance in a way that results in improvements in task performance. Our\nresults imply that changes are needed in QA dataset design and evaluation to\nmore effectively assess the correctness and downstream impacts of model\nabstention.", + "authors": "Bingbing Wen, Bill Howe, Lucy Lu Wang", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Prompting LLMs for Science QA The few-shot capability of LLMs have been applied successfully to knowledge-intensive tasks like question answering (Wei et al., 2021; Chowdhery et al., 2022; Nori et al., 2023). Prompting strategies such as chainof-thought (Wei et al., 2022), least-to-most (Zhou et al., 2022), and others (Kojima et al., 2022; Wang et al., 2022) significantly improve LLMs\u2019 zero or few-shot abilities on diverse QA benchmarks in the general (Rajpurkar et al., 2018; Yang et al., 2015) and scientific (Taylor et al., 2022; Pereira et al., 2023) domains. Pereira et al. (2023) adopt retrieve-than-read on the QASPER dataset (Dasigi et al., 2021), showing that current retrievers are the main bottleneck and readers (LLMs) are already performing at the human level. Inspired by this finding, we study how irrelevant or incorrect contexts (mimicking retrieval errors) can impact LLM performance on context-intensive QA tasks. LLMs and Abstention in QA Several works have shown that LLMs become easier to steer with natural language prompts either as they become larger (Mishra et al., 2022a) or as they are exposed to more instruction tuning (Mishra et al., 2022b; Chung et al., 2022). Liao et al. (2022) introduced a prompt tuning strategy to improve performance on unanswerable questions, by mapping questions onto their specific templates. Other work tried to direct models toward abstention through data augmentation (Zhu et al., 2019). Asai and Choi (2021) provided an in-depth analysis of LMs\u2019 abstention ability, identifying paragraph selection and answerability prediction as two areas for improvement. Recent work introduced new datasets to study whether LLMs know what they do not know (Yin et al., 2023; Amayuelas et al., 2023), while Slobodkin et al. (2023) demonstrated that differences in LLM hidden states identify the boundary between known and unknown. In this work, we systematically characterize LLM abstention capabilities on context-dependent QA tasks. Context Perturbation Prior work studying input perturbations for NLP tasks include approaches such as model-agnostic input transformations (Liang et al., 2022; Ravichander et al., 2022; Giorgi et al., 2022) and adversarial example generation (Jia and Liang, 2017; Wang et al., 2021). Liang et al. (2022) use semantics-preserving and semantics-altering perturbations in their robustness evaluation of LLMs. Pretrained language models can be negatively impacted by irrelevant context (Chowdhery et al., 2022; Liang et al., 2022), e.g., Shi et al. (2023) injected irrelevant numerical context for the MathQA dataset, after which ChatGPT performance dramatically decreased. However, Liang et al. (2022) evaluated T5 (Raffel et al., 2020) and PaLM (Chowdhery et al., 2022), demonDataset Context length (words) Unans. 
proportion Answer types Test set size SQuAD2 128 0.5 Ext 11873 PubmedQA 204 0.1 Bool 500 BioASQ 221 0.0 Bool 140 QASPER 149 0.1 Ext/Abs/Bool 1451 Table 1: QA Dataset statistics. strating that finetuning these models with counterfactual and irrelevant contexts can improve model robustness to noisy context. In our framework, we leverage context perturbations to investigate the LLM abstention behavior for science QA, finding occasional counter-intuitive interactions between abstention and task performance.", + "pre_questions": [], + "main_content": "Introduction Question-answering (QA) in scientific settings is typically defined as a context-dependent task, where models answer questions based on provided context or relevant context it identifies. When the provided or retrieved context is itself unreliable or inconsistent, however, the correct model response should be to abstain from answering. Prior work has studied the abstention capabilities of LLMs (Yin et al., 2023; Amayuelas et al., 2023) and proposed approaches to improve abstention when presented with insufficient context (Zhou 1Code will be publicly available at https://github. com/bbwen/llm_scienceqa. Prompting LLMs Yes Random Context Given Context Random Context No Context Given Context Yes Unanswerable Yes Noisy Context Context Perturbation Question:\u00a0 Do they evaluate which compression method yields the most gains? Paragraphs from paper A Removed the context Paragraphs from paper B Figure 1: Our framework to probe the context sensitivity of LLMs for science QA. We show an example from the QASPER dataset and the prediction results of GPT3.5 under different context perturbation settings. While the model fails to abstain when context is removed, it abstains appropriately when random context is provided. et al., 2023; Slobodkin et al., 2023), but these approaches focus on general domain questionanswering (e.g. SQuAD2) against a narrow range of models (e.g. ChatGPT). As we will show in our experiments (\u00a75.2), general domain settings produce highly divergent results from scientific QA settings, and different model architectures also exhibit differing abilities to abstain. Further, as new LLMs continue to be developed and released, we need an extensible way to measure their ability to abstain from answering questions when provided context is irrelevant or uncertain. To address these challenges, we introduce a framework to assess LLM abstention in science question-answering by removing, replacing, and augmenting provided contexts to control the answerability of questions. Using this framework, we probe abstention performance of four LLMs (LLama2, Vicuna, Flan-T5, and GPT3.5) on arXiv:2404.12452v1 [cs.CL] 18 Apr 2024 one general domain QA dataset (SQuAD2) and three scientific QA datasets (PubmedQA, BioASQ, QASPER). Several of these datasets include unanswerable questions by design, allowing us to analyze interactions between question answerability and noisy context. Our work aims to answer the following: (i) How well do LLMs abstain from answering questions when the correct context is not provided? (ii) How do context perturbations impact task performance and abstention performance? And (iii) How does question type impact task performance and abstention performance? We summarize our contributions below: \u2022 We introduce a framework to study LLM abstention for science QA. 
Specifically, we probe models\u2019 ability to abstain from answering questions when the correct context is not provided, and how abstention is impacted by context perturbations and question type. \u2022 Using this framework, we investigate the task and abstention performance of four LLMs on four QA datasets, ranging from general-domain factoid QA to context-sensitive, document-based science QA (\u00a75). Our results show that no model consistently abstains in all settings where abstention is expected (unanswerable questions and no context/random context settings), though some models demonstrate a stronger ability to abstain for more context-dependent QA tasks, and instruction-tuned models are generally better at following abstention instructions. \u2022 We investigate the impacts of context perturbations (\u00a75.1) and question type (\u00a75.2) on task accuracy and abstention ability. Substituting or augmenting context with random irrelevant context consistently facilitates better abstention performance across models and datasets, which can reflect as a counter-intuitive improvement in task performance. We also find that yes-no questions tend to interfere with all models\u2019 ability to abstain relative to other question types. We conduct experiments on four QA datasets: SQuAD2 (Rajpurkar et al., 2018), PubmedQA (Jin et al., 2019), BioASQ (Nentidis et al., 2021), and QASPER (Dasigi et al., 2021). Dataset statistics are provided in Table 1. These datasets span general and science domains, and include extractive, abstractive, and boolean questions. \u2022 SQuAD2 is a general-domain reading comprehension QA dataset. Answer contexts are extracted from Wikipedia. \u2022 PubmedQA is a biomedical QA dataset. Questions are automatically derived from PubMed paper titles and are answered from the conclusion sections of the corresponding abstracts. All questions can be answered Yes/No/Maybe. \u2022 BioASQ includes Yes/No questions that are formulated by biomedical experts, reflecting reallife information needs encountered during their work. Answers are provided by medical experts from paper abstracts. \u2022 QASPER is a full document science QA dataset. Questions are written by domain experts and answers are annotated from the full text of associated computer science papers. Questions can be boolean, extractive, or abstractive, and multiple answers may be provided for each question. Unanswerable questions Three of these datasets contain unanswerable questions (proportions in Table 1, examples in Table 2). SQuAD2 introduced unanswerable questions in machine reading comprehension; these unanswerable questions were curated by altering questions through negation, antonym swaps, entity swaps, mutual exclusion, impossible conditions, and other ways which make it such that the context paragraph no longer implies any answer. In SQuAD2, \u201cunanswerable\u201d questions imply irrelevant context passages. For PubmedQA, questions can be answered \u201cyes\u201d, \u201cno\u201d, or \u201cmaybe\u201d, and we interpret \u201cmaybe\u201d as \u201cunanswerable\u201d; therefore, \u201cunanswerable\u201d in PubmedQA can be interpreted as answers with high uncertainty based on the given context. For QASPER, \u201cunanswerable\u201d questions are expert-labeled, and mean that no answer is available in the given document. 4 Framework We describe our framework in terms of prompting strategies, context perturbation methods, the choice of models and handling of model output, and evaluation metrics. 
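As a high-level sketch, the context settings examined in §4.2 below amount to simple transformations of each (context, question) pair before the prompt is assembled; the helper below is illustrative only, and assumes that irrelevant passages are drawn from the contexts of training-split questions, as described later.

```python
import random

def perturb_context(context, train_contexts, setting, rng=random):
    """Return the context shown to the model under one of the four settings
    of Section 4.2. `train_contexts` holds contexts of training-split
    questions, from which irrelevant passages are drawn (illustrative helper,
    not the exact implementation)."""
    if setting == "given":
        return context                       # original/gold context
    if setting == "none":
        return ""                            # given context removed
    irrelevant = rng.choice(train_contexts)
    if setting == "random":
        return irrelevant                    # gold context replaced
    if setting == "noisy":
        return context + "\n" + irrelevant   # irrelevant context appended
    raise ValueError(f"unknown setting: {setting}")
```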
4.1 Prompting strategies We adopt prompting templates that achieved the best performance based on recent work (Pereira et al., 2023). We refer to these templates as constrained prompts since answer constraints (e.g., \u201cAnswer \u2018Yes\u2019 or \u2018No\u2019 for boolean questions\u201d) are added to achieve better task performance. For datasets that do not have boolean questions (e.g., SQuAD2), we do not include boolean answer constraints. The example prompt used for the QASPER dataset is the following: Create an Answer to the Question using following documents. Pay attention to only answer \u201cYes\u201d or \u201cNo\u201d for boolean questions. Answer \u201cUnanswerable\u201d when you are not sure about the answer. Context: {c} Question: {q} Answer: Prompts for other datasets are in Appendix A. We conduct zero-shot experiments. Given an input pair (c, q) where c is context and q is question, we prepend the constrained prompt instructions along with an explicit directive for handling unanswerable questions. Given complex interactions between model architecture, pretraining, instructiontuning, dataset, question type, question answerability, context perturbations, and abstention, these settings confer significant complexity for our analysis. We therefore reserve the analysis of few-shot experiments and in-context learning to future work. We also conduct ablations with different prompting templates and abstention representations (results for these experiments in Appendix C). Datasets Unanswerable Examples SQuAD2 Q: Who moved to Hollywood in 2004? C: \u201c.... Following the move to Holyrood in 2004 this building was demolished. The former Midlothian County Buildings facing Parliament Square...\u201d PubmedQA Q: Does rugby headgear prevent concussion? C: \u201c...In addition, coaches from all four levels were questioned about team policies and their personal opinions about the use of headgear to prevent concussion. Although the players tended to believe that the headgear could prevent concussion (62%), the coaches were less convinced (33%)...\u201d QASPER Q: How many Universal Dependency features are considered? C: Empty. Table 2: Example unanswerable questions from datasets (Q: question, C: context). 4.2 Context Perturbation We conduct experiments to assess model sensitivity to context perturbations. We either provide the given context, or perturb the context by removing, replacing, or adding context passages (Figure 1). Given context: We use the original/gold context provided by the dataset. For unanswerable questions in SQuAD2, given context is unchanged but the designers manually modified the question to render the context ineffective for inferring an answer. For PubmedQA, we label a question unanswerable if the given answer is \"maybe\" and do not change the context. For unanswerable questions in QASPER, given context is empty. No context: We remove the given context. Random irrelevant context: We replace the given context with the context from a random question in the train split. Noisy context: We append context from a random question to the given context. 4.3 Models We conduct experiments using LLamaV2-13b-chat (LLama2), Vicuna1.5-13b-chat (Vicuna), FlanT5-XL (Flan-T5), and gpt-3.5-turbo-0613 (GPT3.5). 
We select these models to have reasonable representation across the following attributes: \u2022 Closed API (GPT3.5) vs open weights (LLama2, Vicuna, and Flan-T5) \u2022 Encoder-decoder (Flan-T5) vs decoder-only (LLama2, Vicuna, and GPT3.5) architecture \u2022 Models having less (LLama2) or more (Vicuna) instruction tuning For all models, we use the same hyperparameters at inference time. We set temperature to 0 and top-p sampling to 1 to reduce the variability of model output (details in Appendix B). Post-processing model output LLMs usually produce long outputs, so we post-process their outputs to obtain structured predicted answers. For boolean questions, we extract the first words from the model output. If these words contain \u201cyes,\u201d \u201cno,\u201d or \u201cunanswerable,\u201d we map them to the corresponding classes. 4.4 Evaluation metrics We report both task performance (F1 or Accuracy) and abstention performance (rate of abstention) across different experimental settings. Task performance For SQuAD2 and QASPER, we evaluate task performance using n-gram F1 as reported in Rajpurkar et al. (2018). For PubmedQA and BioASQ (boolean questions only), we report Accuracy based on the original papers. Task performance is therefore comparable under different context settings, but is not comparable across datasets. Abstention performance We measure model abstention performance by calculating the proportion of questions that is answered \u201cunanswerable.\u201d Ideally, models that are perfect at abstaining would be expected to have an abstention proportion of 1.0 when no or irrelevant context is provided for all questions. However, in reality this is not the case, since many questions may be context-independent or could be answered using model parametric knowledge. Abstention is therefore a quality dependent on (i) model ability to follow instructions (\u201cAnswer \u2018Unanswerable\u2019 when you are not sure\u201d), (ii) question context dependency (whether a question is answerable without context), and (iii) question difficulty (whether the question is answerable using model parametric knowledge). For clarity, we show task performance as plain numbers and abstention rates and deltas using (parentheses). 5 Results We report baseline task performance (Table 3) and abstention performance (Table 4) for all models on all datasets. Results for context perturbations SQuAD2 PubmedQA BioASQ QASPER SOTA 90.5* 77.6** 94.3** 61.4\u2020 (a) Given context LLama2 51.7 52.6 98.6 16.8 Vicuna 61.0 36.4 93.6 30.5 Flan-T5 87.4 73.2 97.8 60.1 GPT3.5 60.4 61.2 97.9 57.8 (b) No context LLama2 -11.7 -17.8 -43.6 -2.7 Vicuna -24.0 -19.4 -32.9 -10.4 Flan-T5 -38.2 -16.4 -30.8 -38.0 GPT3.5 -22.9 -39.0 -46.4 -37.7 (c) Random context LLama2 -1.9 -40.8 -63.6 -0.8 Vicuna -11.8 -25.2 -41.5 -11.5 Flan-T5 -37.3 -59.4 -63.5 -37.2 GPT3.5 -10.4 -50.0 -64.9 -37.6 (d) Noisy context LLama2 -4.3 -13.2 -10.7 +1.9 Vicuna -1.4 +4.8 -0.7 +4.7 Flan-T5 +0.0 -0.2 -0.7 +0.0 GPT3.5 -2.4 -2.8 -2.2 -2.4 Table 3: Model zero-shot task performance using constrained prompts. SOTA indicates the best zero-shot performance of LLMs reported in previous papers (*=FlanUL2 (Slobodkin et al., 2023), **=Galactica (Taylor et al., 2022)), or best performance from a pretrained LM (\u2020=UnifiedQA-large (Dasigi et al., 2021)). Best performances bolded, second best underlined among baseline performance (a). 
5 Results
We report baseline task performance (Table 3) and abstention performance (Table 4) for all models on all datasets. Results for context perturbations are shown as changes to task performance and abstention performance from baseline. Additional analysis by question type is shown in Figure 2.
          SQuAD2  PubmedQA  BioASQ  QASPER
SOTA       90.5*    77.6**  94.3**   61.4\u2020
(a) Given context
LLama2      51.7     52.6     98.6    16.8
Vicuna      61.0     36.4     93.6    30.5
Flan-T5     87.4     73.2     97.8    60.1
GPT3.5      60.4     61.2     97.9    57.8
(b) No context
LLama2     -11.7    -17.8    -43.6    -2.7
Vicuna     -24.0    -19.4    -32.9   -10.4
Flan-T5    -38.2    -16.4    -30.8   -38.0
GPT3.5     -22.9    -39.0    -46.4   -37.7
(c) Random context
LLama2      -1.9    -40.8    -63.6    -0.8
Vicuna     -11.8    -25.2    -41.5   -11.5
Flan-T5    -37.3    -59.4    -63.5   -37.2
GPT3.5     -10.4    -50.0    -64.9   -37.6
(d) Noisy context
LLama2      -4.3    -13.2    -10.7    +1.9
Vicuna      -1.4     +4.8     -0.7    +4.7
Flan-T5     +0.0     -0.2     -0.7    +0.0
GPT3.5      -2.4     -2.8     -2.2    -2.4
Table 3: Model zero-shot task performance using constrained prompts. SOTA indicates the best zero-shot performance of LLMs reported in previous papers (*=FlanUL2 (Slobodkin et al., 2023), **=Galactica (Taylor et al., 2022)), or the best performance from a pretrained LM (\u2020=UnifiedQA-large (Dasigi et al., 2021)). Best performances bolded, second best underlined among baseline performance (a). Colors indicate positive or negative delta from baseline task performance with different context perturbations: (b) no context, (c) random context, and (d) noisy context. While task performance generally degrades with context perturbations, this is not the case for QASPER due to interactions between abstention and task performance (see Section 5.1).
Baseline task performance in the zero-shot prompting setting is presented in Table 3(a), alongside previously reported SOTA zero-shot performance of LLMs on each dataset. Surprisingly, Flan-T5 achieves the best performance on SQuAD2, PubmedQA, and QASPER, while LLama2 performs best on BioASQ. GPT3.5 achieves the second-best performance (very close to the best) on PubmedQA, BioASQ, and QASPER. The task performance of the tested models is close to, and sometimes exceeds, the reported SOTA on each dataset. In this work, however, we focus on assessing the change in task performance under different context settings rather than on maximizing model performance.
5.1 Impact of context perturbations
Random context facilitates abstention. In Table 4(c), we observe that all models are much more likely to abstain from answering when given random context, except on the BioASQ dataset. For SQuAD2 and PubmedQA, random context improves most models\u2019 abstention performance to close to 1. Specifically, random context improves Flan-T5\u2019s ability to abstain from answering on PubmedQA, while the no context setting fails to do so. For QASPER, the abstention performance on answerable questions in the random context setting is much higher compared to no context, but is still far from 1. Oddly, for boolean questions in BioASQ, random context biases the model towards answering \u201cno\u201d rather than \u201cunanswerable\u201d, and we observe no changes in abstention rate (further analysis in Section 6).
Adding noisy context can counter-intuitively improve task performance on some datasets. While noisy context has mixed effects on abstention (Table 4(d)), this perturbation does not always translate to negative impacts on task performance. In Table 3(d), task performance increases on QASPER for LLama2, Vicuna, and Flan-T5. Since QASPER\u2019s unanswerable questions have empty context, the baseline task performance for unanswerable questions is actually computed with no context; perturbing no context by adding noisy context therefore leads to a trade-off between task performance and abstention performance (some models abstain much more for unanswerable questions, and task performance consequently improves).
5.2 Impact of question type
Models\u2019 abstention capabilities vary by question type. For extractive questions, Figure 2(a) and (d) show that model abstention varies significantly between the given context and no context settings; under no context settings, models can achieve abstention performance close to 1. Abstractive answers show similar patterns. Notably, we find all models are reluctant to abstain from boolean questions for all datasets; this overconfidence is apparent even for QASPER, which is a highly document-specific QA dataset. Surprisingly, Flan-T5 and LLama2 demonstrate near zero abstention performance on boolean questions on any dataset, regardless of whether or not context is provided.2 Vicuna and GPT3.5 perform only slightly better on this front.
Footnote 2: We conduct prompt ablations in Appendix C, showing that the lack of abstention on boolean questions is highly sensitive to prompt wording. Without a boolean instruction in the prompt, both models abstain well for boolean questions.
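The per-setting and per-question-type breakdowns reported in Table 4 and Figure 2 reduce to grouping predictions and comparing abstention proportions against the given-context baseline. The sketch below illustrates that aggregation; the record fields and function names are illustrative assumptions rather than the actual analysis code.

from collections import defaultdict

# Each record is one (question, setting) prediction, e.g.
# {"dataset": "QASPER", "qtype": "boolean", "answerable": True,
#  "setting": "given", "abstained": False}
def abstention_by_group(records):
    """Abstention proportion per (dataset, setting, question type, answerability)."""
    counts = defaultdict(lambda: [0, 0])  # key -> [n_abstained, n_total]
    for r in records:
        key = (r["dataset"], r["setting"], r["qtype"], r["answerable"])
        counts[key][0] += int(r["abstained"])
        counts[key][1] += 1
    return {key: n_abstained / n_total for key, (n_abstained, n_total) in counts.items()}

def deltas_from_baseline(rates, baseline_setting="given"):
    """Express perturbed settings as deltas from the given-context baseline, as in Table 4."""
    deltas = {}
    for (dataset, setting, qtype, answerable), rate in rates.items():
        base = rates.get((dataset, baseline_setting, qtype, answerable))
        if setting != baseline_setting and base is not None:
            deltas[(dataset, setting, qtype, answerable)] = rate - base
    return deltas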
         SQuAD2            PubmedQA          BioASQ   QASPER
Model    Ans.    Unans.    Ans.    Unans.    Ans.     Ans.    Unans.
Baseline
LLama2   (44.7)  (82.0)    (37.3)  (34.5)    (0.0)    (3.1)   (30.9)
Vicuna   (21.1)  (64.9)    (61.8)  (60.0)    (0.0)    (30.0)  (57.0)
Flan-T5  (4.4)   (85.0)    (0.0)   (0.0)     (0.0)    (13.0)  (87.0)
GPT3.5   (3.7)   (53.3)    (20.4)  (21.8)    (0.0)    (15.0)  (89.2)
(b) No context
LLama2   (+14.8) (-11.7)   (-7.0)  (-3.6)    (+2.8)   (+17.7) (+3.7)
Vicuna   (+28.3) (-4.7)    (+26.7) (+27.3)   (+10.7)  (+38.2) (+20.7)
Flan-T5  (+91.2) (+11.3)   (+0.0)  (+0.0)    (+0.0)   (+65.0) (-2.8)
GPT3.5   (+37.1) (+2.5)    (+57.6) (+56.4)   (+36.4)  (+73.8) (+8.4)
(c) Random context
LLama2   (+54.8) (+17.2)   (+58.7) (+60.0)   (+0.0)   (+16.9) (+17.8)
Vicuna   (+73.8) (+31.7)   (+37.5) (+40.0)   (+0.0)   (+44.0) (+34.5)
Flan-T5  (+95.4) (+15.0)   (+91.5) (+92.7)   (+0.0)   (+54.6) (-0.9)
GPT3.5   (+95.5) (+45.9)   (+79.6) (+78.2)   (+0.0)   (+66.9) (+5.5)
(d) Noisy context
LLama2   (+22.8) (+3.2)    (+18.7) (+18.2)   (+0.0)   (+2.2)  (+17.4)
Vicuna   (+8.2)  (+2.5)    (-11.0) (-12.7)   (+0.0)   (+13.2) (+34.7)
Flan-T5  (-0.9)  (-1.3)    (+0.0)  (+0.0)    (+0.0)   (-1.4)  (-2.7)
GPT3.5   (+1.5)  (-2.8)    (+3.2)  (+0.0)    (+0.0)   (+4.5)  (+4.6)
Table 4: Abstention performance across different models broken down by answerable and unanswerable questions across various datasets. Baseline abstention rates are shown at the top with a gray background. Abstention rates under (b) No context, (c) Random context, and (d) Noisy context settings are shown below as deltas from the base rate. Colors indicate a positive or negative delta. All context perturbations improve model abstention performance in some settings, though this is not uniform over datasets, models, question answerability, or perturbation setting.
[Figure 2: four panels, (a) SQuAD, (b) PubmedQA, (c) BioASQ, (d) QASPER; x-axis: Abstention Proportion from 0.0 to 1.0; rows per panel: Extractive (Ext), Abstractive (Abs), Boolean (Bool), and Unanswerable (UnAns); markers compare the w/ context and no context settings for LLama2, Vicuna, Flan-T5, and GPT3.5.]
Figure 2: Abstention performance changes from \u201cwith context\u201d to \u201cno context\u201d settings across different question types. Each row represents one question type, from top to bottom: \u201cUnanswerable\u201d, \u201cBoolean\u201d, \u201cAbstractive\u201d and \u201cExtractive\u201d. White background indicates a dataset does not have questions of that type.
Abstention performance on unanswerable questions varies by dataset. In Figure 2, we examine the abstention performance on unanswerable questions (yellow background) and find that performance varies significantly across datasets and models. For SQuAD2, GPT3.5 performs consistently regardless of whether context is provided, indicating that GPT3.5 may be able to recognize the irrelevance of the context. Surprisingly, LLama2 and Vicuna abstain less when context is removed, which contradicts intuition.
In SQuAD2, unanswerable questions are still related to the provided context, which may explain why, when the confusing context is removed, these two models become more likely to answer. For PubmedQA, unanswerable questions have high-uncertainty context; Flan-T5 and LLama2 consistently refuse to abstain from answering these questions, which are boolean, while Vicuna follows the \u201cunanswerable\u201d instruction quite well. For QASPER, abstention performance is generally higher across all models compared to SQuAD2 and PubmedQA since the questions are more document-grounded.
Model     SQuAD2  PubmedQA  BioASQ  QASPER
LLama2      29.8      16.6     2.8    20.6
Vicuna      33.6      29.0    10.7    44.9
Flan-T5     91.2       0.0     0.0    66.0
GPT3.5      38.8      59.3    36.4    75.7
Table 5: % of answerable questions for which the model changes from answering to abstaining when context is removed. Best performances bolded, second best underlined.
5.3 Abstention performance of LLMs
Instruction-tuned models more readily abstain. Table 5 shows the proportion of answers for which the model changed from answering to abstaining when context is removed. The expectation for context-dependent questions would be that a model abstains from answering if no context is provided. GPT3.5 achieves the largest abstention changes on PubmedQA, BioASQ, and QASPER, indicating that it is responsive to context loss. Vicuna, as an instruction-tuned model, also demonstrates relatively strong abstention behavior in these cases, though to a lesser degree than GPT3.5 for all datasets. This effect is contrary to LLama2, which abstains at a similar rate regardless of whether context is provided.
Flan-T5 readily abstains for non-boolean questions. Flan-T5 abstains well when context is removed (Table 5). We observe large changes for SQuAD2 and QASPER (comparable to or better than GPT3.5), although this behavior does not generalize at all to boolean questions\u2014the change is 0.0 for PubmedQA and BioASQ, which consist only of boolean questions. Different prompting strategies may be necessary to enjoin certain models to abstain when answering boolean questions.
6 Error Analysis
We perform error analysis on the most consistently high-performing model in our analysis, GPT3.5, and summarize the main reasons why the model may fail to abstain. We sample 20 failure cases each from among answerable questions and unanswerable questions, for both the gold context and no context settings. We present our findings by dataset.
SQuAD2. Answerable questions are either general (e.g., \u201cWhat German poet was descended from Huguenots?\u201d) or could be answered without context (e.g., \u201cWhat types of pumps are typically used in industrial boilers?\u201d); the model tends to answer these regardless of whether context is provided. For unanswerable questions, half of the failure cases are very open-ended, like \u201cWhat effect do technologies and resources generate?\u201d, and removing context will cause GPT3.5 to answer rather than abstain. Another 20% of questions contain popular entities but have no correct answer, e.g., \u201cWhat lake contains part of the Rhine Falls?\u201d misleads the model to generate \u201cLake Constance\u201d as an answer since \u201cThe Rhine emerges from Lake Constance.\u201d
PubmedQA. GPT3.5 is strongly inclined to answer \u201cyes\u201d for all failure cases among both answerable and unanswerable questions under the no context setting, perhaps due to the phrasing of automatically constructed questions.
BioASQ. Around 90% of the failure cases we sampled were answered correctly by GPT3.5 without context, including both \u201cyes\u201d and \u201cno\u201d answers. Another 10% of cases are affected by the model\u2019s tendency to hallucinate \u201cyes\u201d, which is consistent with cases in PubmedQA. We manually substitute antonyms for the word in brackets on failure cases such as \u201cIs Apelin usually [decreased] in diabetes?\u201d, \u201cDoes vesatolimod [inhibit] TLR7?\u201d, etc., and GPT3.5 still always answers \u201cyes\u201d. Interestingly, for these factoid questions, substituting gold with random context skews the model strongly towards answering \u201cno\u201d. These two types of hallucination behavior require further investigation. Our observations also raise concerns for assessing LLM performance using QA benchmarks containing boolean questions.
QASPER. For answerable questions, 50% of failures seem to be caused by GPT3.5\u2019s ignorance of the ambiguity resulting from anaphora; for instance, GPT3.5 should not be able to resolve terms in questions such as \u201cthey\u201d, \u201cthis study\u201d, and \u201cthe models\u201d without context, yet the model answers these questions anyway under the no context setting. Around 30% of questions are very general, such as \u201cWhat is a soft label?\u201d and \u201cWhy is NER for tweets more challenging as the number of entities increases?\u201d\u2014 these questions lead the model to answer rather than abstain. For unanswerable questions, the distribution of failure reasons is similar. Table 6 shows two context perturbation examples with answers generated by GPT3.5. More examples are given in Appendix D.
SQuAD2 example
Context: Computational complexity theory is a branch of the theory of computation in theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other...
Q: What is a manual application of mathematical steps?
Given context: Unanswerable | No context: Calculation \u2717 | Random context: Unanswerable | Noisy context: Unanswerable | Ground truth: Unanswerable
QASPER example
Context: Table TABREF35 show the comparisons between tree and sequential based methods. We can see that, if we don\u2019t deploy CNN, simple Tree LSTM yields better result than traditional LSTM, but worse than Bidirectional LSTM...
Q: Do they separately evaluate performance of their learned representations (before forwarding them to the CNN layer)?
Given context: No \u2717 | No context: Yes \u2717 | Random context: Unanswerable | Noisy context: Unanswerable \u2717 | Ground truth: Yes
Table 6: Examples of GPT3.5 predictions under different context perturbations (\u2717 marks problematic predictions). For SQuAD2, removing context results in the model no longer abstaining, and responding inaccurately. For QASPER, the model answers the question incorrectly in the gold context setting; removing context yields an answer that happens to match the gold label, but the model should abstain in this case instead.
7 Discussion & Conclusion
Our study investigates the impacts of context removal and perturbation on LLM performance on scientific QA. While a lack of correct context should result in a model abstaining from answering a question, our results highlight inconsistent patterns of model behavior based on model pretraining paradigms, question types, and the context-dependence of various QA datasets.
For example, perturbing the given context by substituting random irrelevant context or by adding noisy context would be expected to reduce task performance, but in some cases the resulting improvements in abstention can negate any reductions in task performance and potentially lead to gains (e.g., LLama2, Vicuna, and Flan-T5 on QASPER). Additionally, we find that abstention varies greatly by question type, with all models in our experiments struggling to abstain on boolean questions.
Future work for enhancing models\u2019 abstention ability could investigate the impact of:
(i) Different prompting strategies: since model abstention ability is sensitive to constrained prompts (such as boolean instructions), as shown in Section 5.2 and Appendix C, how to select prompting strategies that produce the best trade-off between task performance and abstention performance remains an interesting problem;
(ii) Alternate model architectures: smaller models such as Flan-T5-XL with encoder-decoder architectures performed comparably to larger decoder-only models such as ChatGPT. Further exploration of encoder-decoder architectures, or introducing an auxiliary module to foster understanding of context, may be helpful;
(iii) Other context perturbations: we show initial results that providing noisy context can counter-intuitively improve task performance on some datasets due to interactions with unanswerable questions. How this interacts with retrieval errors that occur in open-domain QA is a direction that could be explored in future work.
Our results have direct implications for dataset curators and model developers. Benchmark QA datasets with mixtures of unanswerable and answerable questions were designed to facilitate assessment of abstention ability, yet our experiments show conflation and a lack of clear assessment of either abstention ability or performance accuracy. In other words, while unanswerable questions were motivated by the need to measure abstention, when they are aggregated with other questions during task performance evaluation, this distinction is lost. Additionally, unanswerable questions in different datasets measure different phenomena: in SQuAD2, they measure model sensitivity to irrelevant context passages, while in QASPER, they indicate that a question is unanswerable based on the given document; neither of these clearly maps to the notion of abstaining under insufficient information. Given the importance of assessing model abstention capabilities, separating task and abstention assessment, coupled with changes in dataset construction, is needed to better align model performance on these tasks with human expectations.
Beyond dataset curation, other variables\u2014such as the task defined by the dataset, the types of questions posed, the architecture of models, instruction tuning techniques, in-context learning, and domain-specific pretraining\u2014can affect a model\u2019s ability to effectively abstain and provide accurate answers. In this work, we attempt to disentangle some of these factors, though not all. Future studies could explore the extent to which in-domain pretraining and abstention-specific instruction tuning techniques impact model abstention performance.
On the other hand, our results also hold implications for downstream builders and users of interactive systems who rely on LLMs as question-answering tools. We show that in some cases (e.g., boolean questions), LLMs exhibit little to no ability to abstain, even for highly context-dependent questions such as those in QASPER.
System designers should be cautious of these limitations and when/how to infer model uncertainty and communicate this uncertainty to users. For communicating abstention capabilities to users, confidence scores or similar indicators that reflect model certainty, accompanying explanations, or guidance on how to interpret model answers can empower users to make more informed decisions. Limitations This work evaluates LLM abstention in situations with incorrect or noisy context, which mimics retrieval errors in retrieval-augmented systems. The problem of abstaining is also crucial in other settings, such as from the fairness, privacy/copyright, and safety perspectives, which we do not address in this work. In the future, we plan to propose a unified framework for abstention evaluation that considers these settings as well. The interplay between abstention and task performance can only be studied in datasets with unanswerable questions, which is scarce in the landscape of QA datasets. We only conduct experiments on three such datasets, so our results may have difficulty generalizing. Additionally, we note the different ways that unanswerable questions were constructed among these datasets, which imply that our experiments may be measuring different notions of \u201cunanswerability.\u201d In addition, although question type diversity is common in real application, there are insufficients numbers of QA datasets with question type diversity. Therefore, our results may not generalize, as several question types are only represented in one or two of our datasets. We also note the sensitivity of LLMs to prompt phrasing, as with constrained and free-form prompts for boolean questions, and emphasize that it may be difficult to acquire consistent results without significant efforts made in prompt engineering. Due to the budget constraints, we could not conduct experiments with every single model, dataset, or context perturbation method from existing work, though our framework can be applied to other datasets and models. We will make our framework publicly available such that new models and datasets, as well as context perturbations could be added. Other factors affecting abstention such as in-domain pretraining and in-context learning were also not studied in this work and could be incorporated in the future." + }, + { + "url": "http://arxiv.org/abs/2212.09656v1", + "title": "Visconde: Multi-document QA with GPT-3 and Neural Reranking", + "abstract": "This paper proposes a question-answering system that can answer questions\nwhose supporting evidence is spread over multiple (potentially long) documents.\nThe system, called Visconde, uses a three-step pipeline to perform the task:\ndecompose, retrieve, and aggregate. The first step decomposes the question into\nsimpler questions using a few-shot large language model (LLM). Then, a\nstate-of-the-art search engine is used to retrieve candidate passages from a\nlarge collection for each decomposed question. In the final step, we use the\nLLM in a few-shot setting to aggregate the contents of the passages into the\nfinal answer. The system is evaluated on three datasets: IIRC, Qasper, and\nStrategyQA. Results suggest that current retrievers are the main bottleneck and\nthat readers are already performing at the human level as long as relevant\npassages are provided. The system is also shown to be more effective when the\nmodel is induced to give explanations before answering a question. 
Code is\navailable at \\url{https://github.com/neuralmind-ai/visconde}.", + "authors": "Jayr Pereira, Robson Fidalgo, Roberto Lotufo, Rodrigo Nogueira", + "published": "2022-12-19", + "updated": "2022-12-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.11416v5", + "title": "Scaling Instruction-Finetuned Language Models", + "abstract": "Finetuning language models on a collection of datasets phrased as\ninstructions has been shown to improve model performance and generalization to\nunseen tasks. In this paper we explore instruction finetuning with a particular\nfocus on (1) scaling the number of tasks, (2) scaling the model size, and (3)\nfinetuning on chain-of-thought data. We find that instruction finetuning with\nthe above aspects dramatically improves performance on a variety of model\nclasses (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and\nevaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For\ninstance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PALM\n540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves\nstate-of-the-art performance on several benchmarks, such as 75.2% on five-shot\nMMLU. We also publicly release Flan-T5 checkpoints, which achieve strong\nfew-shot performance even compared to much larger models, such as PaLM 62B.\nOverall, instruction finetuning is a general method for improving the\nperformance and usability of pretrained language models.", + "authors": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei", + "published": "2022-10-20", + "updated": "2022-12-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.00093v3", + "title": "Large Language Models Can Be Easily Distracted by Irrelevant Context", + "abstract": "Large language models have achieved impressive performance on various natural\nlanguage processing tasks. However, so far they have been evaluated primarily\non benchmarks where all information in the input context is relevant for\nsolving the task. In this work, we investigate the distractibility of large\nlanguage models, i.e., how the model problem-solving accuracy can be influenced\nby irrelevant context. In particular, we introduce Grade-School Math with\nIrrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant\ninformation in the problem description. We use this benchmark to measure the\ndistractibility of cutting-edge prompting techniques for large language models,\nand find that the model performance is dramatically decreased when irrelevant\ninformation is included. 
We also identify several approaches for mitigating\nthis deficiency, such as decoding with self-consistency and adding to the\nprompt an instruction that tells the language model to ignore the irrelevant\ninformation.", + "authors": "Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Sch\u00e4rli, Denny Zhou", + "published": "2023-01-31", + "updated": "2023-06-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.02840v2", + "title": "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models", + "abstract": "Large-scale pre-trained language models have achieved tremendous success\nacross a wide range of natural language understanding (NLU) tasks, even\nsurpassing human performance. However, recent studies reveal that the\nrobustness of these models can be challenged by carefully crafted textual\nadversarial examples. While several individual datasets have been proposed to\nevaluate model robustness, a principled and comprehensive benchmark is still\nmissing. In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task\nbenchmark to quantitatively and thoroughly explore and evaluate the\nvulnerabilities of modern large-scale language models under various types of\nadversarial attacks. In particular, we systematically apply 14 textual\nadversarial attack methods to GLUE tasks to construct AdvGLUE, which is further\nvalidated by humans for reliable annotations. Our findings are summarized as\nfollows. (i) Most existing adversarial attack algorithms are prone to\ngenerating invalid or ambiguous adversarial examples, with around 90% of them\neither changing the original semantic meanings or misleading human annotators\nas well. Therefore, we perform a careful filtering process to curate a\nhigh-quality benchmark. (ii) All the language models and robust training\nmethods we tested perform poorly on AdvGLUE, with scores lagging far behind the\nbenign accuracy. We hope our work will motivate the development of new\nadversarial attacks that are more stealthy and semantic-preserving, as well as\nnew robust language models against sophisticated adversarial attacks. AdvGLUE\nis available at https://adversarialglue.github.io.", + "authors": "Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li", + "published": "2021-11-04", + "updated": "2022-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.10625v3", + "title": "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models", + "abstract": "Chain-of-thought prompting has demonstrated remarkable performance on various\nnatural language reasoning tasks. However, it tends to perform poorly on tasks\nwhich requires solving problems harder than the exemplars shown in the prompts.\nTo overcome this challenge of easy-to-hard generalization, we propose a novel\nprompting strategy, least-to-most prompting. The key idea in this strategy is\nto break down a complex problem into a series of simpler subproblems and then\nsolve them in sequence. Solving each subproblem is facilitated by the answers\nto previously solved subproblems. 
Our experimental results on tasks related to\nsymbolic manipulation, compositional generalization, and math reasoning reveal\nthat least-to-most prompting is capable of generalizing to more difficult\nproblems than those seen in the prompts. A notable finding is that when the\nGPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve\nthe compositional generalization benchmark SCAN in any split (including length\nsplit) with an accuracy of at least 99% using just 14 exemplars, compared to\nonly 16% accuracy with chain-of-thought prompting. This is particularly\nnoteworthy because neural-symbolic models in the literature that specialize in\nsolving SCAN are trained on the entire training set containing over 15,000\nexamples. We have included prompts for all the tasks in the Appendix.", + "authors": "Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi", + "published": "2022-05-21", + "updated": "2023-04-16", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.13712v1", + "title": "Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models", + "abstract": "This paper investigates the capabilities of Large Language Models (LLMs) in\nthe context of understanding their own knowledge and measuring their\nuncertainty. We argue this is an important feature for mitigating\nhallucinations. Specifically, we focus on addressing \\textit{known-unknown}\nquestions, characterized by high uncertainty due to the absence of definitive\nanswers. To facilitate our study, we collect a dataset with new Known-Unknown\nQuestions (KUQ) and propose a novel categorization scheme to elucidate the\nsources of uncertainty. Subsequently, we assess the LLMs' ability to\ndifferentiate between known and unknown questions and classify them\naccordingly. Moreover, we evaluate the quality of their answers in an\nOpen-Ended QA setting. To quantify the uncertainty expressed in the answers, we\ncreate a semantic evaluation method that measures the model's accuracy in\nexpressing uncertainty between known vs unknown questions.", + "authors": "Alfonso Amayuelas, Liangming Pan, Wenhu Chen, William Wang", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.11915v2", + "title": "Challenges in Information-Seeking QA: Unanswerable Questions and Paragraph Retrieval", + "abstract": "Recent pretrained language models \"solved\" many reading comprehension\nbenchmarks, where questions are written with access to the evidence document.\nHowever, datasets containing information-seeking queries where evidence\ndocuments are provided after the queries are written independently remain\nchallenging. We analyze why answering information-seeking queries is more\nchallenging and where their prevalent unanswerabilities arise, on Natural\nQuestions and TyDi QA. Our controlled experiments suggest two headrooms --\nparagraph selection and answerability prediction, i.e. whether the paired\nevidence document contains the answer to the query or not. When provided with a\ngold paragraph and knowing when to abstain from answering, existing models\neasily outperform a human annotator. However, predicting answerability itself\nremains challenging. 
We manually annotate 800 unanswerable examples across six\nlanguages on what makes them challenging to answer. With this new data, we\nconduct per-category answerability prediction, revealing issues in the current\ndataset collection as well as task formulation. Together, our study points to\navenues for future research in information-seeking question answering, both for\ndataset creation and model development.", + "authors": "Akari Asai, Eunsol Choi", + "published": "2020-10-22", + "updated": "2021-06-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.16452v1", + "title": "Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine", + "abstract": "Generalist foundation models such as GPT-4 have displayed surprising\ncapabilities in a wide variety of domains and tasks. Yet, there is a prevalent\nassumption that they cannot match specialist capabilities of fine-tuned models.\nFor example, most explorations to date on medical competency benchmarks have\nleveraged domain-specific training, as exemplified by efforts on BioGPT and\nMed-PaLM. We build on a prior study of GPT-4's capabilities on medical\nchallenge benchmarks in the absence of special training. Rather than using\nsimple prompting to highlight the model's out-of-the-box capabilities, we\nperform a systematic exploration of prompt engineering. We find that prompting\ninnovation can unlock deeper specialist capabilities and show that GPT-4 easily\ntops prior leading results for medical benchmarks. The prompting methods we\nexplore are general purpose, and make no specific use of domain expertise,\nremoving the need for expert-curated content. Our experimental design carefully\ncontrols for overfitting during the prompt engineering process. We introduce\nMedprompt, based on a composition of several prompting strategies. With\nMedprompt, GPT-4 achieves state-of-the-art results on all nine of the benchmark\ndatasets in the MultiMedQA suite. The method outperforms leading specialist\nmodels such as Med-PaLM 2 by a significant margin with an order of magnitude\nfewer calls to the model. Steering GPT-4 with Medprompt achieves a 27%\nreduction in error rate on the MedQA dataset over the best methods to date\nachieved with specialist models and surpasses a score of 90% for the first\ntime. Beyond medical problems, we show the power of Medprompt to generalize to\nother domains and provide evidence for the broad applicability of the approach\nvia studies of the strategy on exams in electrical engineering, machine\nlearning, philosophy, accounting, law, nursing, and clinical psychology.", + "authors": "Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, Renqian Luo, Scott Mayer McKinney, Robert Osazuwa Ness, Hoifung Poon, Tao Qin, Naoto Usuyama, Chris White, Eric Horvitz", + "published": "2023-11-28", + "updated": "2023-11-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "I.2.7" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.11171v4", + "title": "Self-Consistency Improves Chain of Thought Reasoning in Language Models", + "abstract": "Chain-of-thought prompting combined with pre-trained large language models\nhas achieved encouraging results on complex reasoning tasks. In this paper, we\npropose a new decoding strategy, self-consistency, to replace the naive greedy\ndecoding used in chain-of-thought prompting. 
It first samples a diverse set of\nreasoning paths instead of only taking the greedy one, and then selects the\nmost consistent answer by marginalizing out the sampled reasoning paths.\nSelf-consistency leverages the intuition that a complex reasoning problem\ntypically admits multiple different ways of thinking leading to its unique\ncorrect answer. Our extensive empirical evaluation shows that self-consistency\nboosts the performance of chain-of-thought prompting with a striking margin on\na range of popular arithmetic and commonsense reasoning benchmarks, including\nGSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and\nARC-challenge (+3.9%).", + "authors": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou", + "published": "2022-03-21", + "updated": "2023-03-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.09656v1", + "title": "Visconde: Multi-document QA with GPT-3 and Neural Reranking", + "abstract": "This paper proposes a question-answering system that can answer questions\nwhose supporting evidence is spread over multiple (potentially long) documents.\nThe system, called Visconde, uses a three-step pipeline to perform the task:\ndecompose, retrieve, and aggregate. The first step decomposes the question into\nsimpler questions using a few-shot large language model (LLM). Then, a\nstate-of-the-art search engine is used to retrieve candidate passages from a\nlarge collection for each decomposed question. In the final step, we use the\nLLM in a few-shot setting to aggregate the contents of the passages into the\nfinal answer. The system is evaluated on three datasets: IIRC, Qasper, and\nStrategyQA. Results suggest that current retrievers are the main bottleneck and\nthat readers are already performing at the human level as long as relevant\npassages are provided. The system is also shown to be more effective when the\nmodel is induced to give explanations before answering a question. Code is\navailable at \\url{https://github.com/neuralmind-ai/visconde}.", + "authors": "Jayr Pereira, Robson Fidalgo, Roberto Lotufo, Rodrigo Nogueira", + "published": "2022-12-19", + "updated": "2022-12-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.02311v5", + "title": "PaLM: Scaling Language Modeling with Pathways", + "abstract": "Large language models have been shown to achieve remarkable performance\nacross a variety of natural language tasks using few-shot learning, which\ndrastically reduces the number of task-specific training examples needed to\nadapt the model to a particular application. To further our understanding of\nthe impact of scale on few-shot learning, we trained a 540-billion parameter,\ndensely activated, Transformer language model, which we call Pathways Language\nModel PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML\nsystem which enables highly efficient training across multiple TPU Pods. We\ndemonstrate continued benefits of scaling by achieving state-of-the-art\nfew-shot learning results on hundreds of language understanding and generation\nbenchmarks. On a number of these tasks, PaLM 540B achieves breakthrough\nperformance, outperforming the finetuned state-of-the-art on a suite of\nmulti-step reasoning tasks, and outperforming average human performance on the\nrecently released BIG-bench benchmark. 
A significant number of BIG-bench tasks\nshowed discontinuous improvements from model scale, meaning that performance\nsteeply increased as we scaled to our largest model. PaLM also has strong\ncapabilities in multilingual tasks and source code generation, which we\ndemonstrate on a wide array of benchmarks. We additionally provide a\ncomprehensive analysis on bias and toxicity, and study the extent of training\ndata memorization with respect to model scale. Finally, we discuss the ethical\nconsiderations related to large language models and discuss potential\nmitigation strategies.", + "authors": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel", + "published": "2022-04-05", + "updated": "2022-10-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.11877v2", + "title": "The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models", + "abstract": "Large language models (LLMs) have been shown to possess impressive\ncapabilities, while also raising crucial concerns about the faithfulness of\ntheir responses. A primary issue arising in this context is the management of\n(un)answerable queries by LLMs, which often results in hallucinatory behavior\ndue to overconfidence. In this paper, we explore the behavior of LLMs when\npresented with (un)answerable queries. We ask: do models represent the fact\nthat the question is (un)answerable when generating a hallucinatory answer? Our\nresults show strong indications that such models encode the answerability of an\ninput query, with the representation of the first decoded token often being a\nstrong indicator. These findings shed new light on the spatial organization\nwithin the latent representations of LLMs, unveiling previously unexplored\nfacets of these models. Moreover, they pave the way for the development of\nimproved decoding techniques with better adherence to factual generation,\nparticularly in scenarios where query (un)answerability is a concern.", + "authors": "Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel", + "published": "2023-10-18", + "updated": "2023-11-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.01652v5", + "title": "Finetuned Language Models Are Zero-Shot Learners", + "abstract": "This paper explores a simple method for improving the zero-shot learning\nabilities of language models. 
We show that instruction tuning -- finetuning\nlanguage models on a collection of tasks described via instructions --\nsubstantially improves zero-shot performance on unseen tasks.\n We take a 137B parameter pretrained language model and instruction-tune it on\nover 60 NLP tasks verbalized via natural language instruction templates. We\nevaluate this instruction-tuned model, which we call FLAN, on unseen task\ntypes. FLAN substantially improves the performance of its unmodified\ncounterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we\nevaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE,\nBoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number\nof finetuning datasets, model scale, and natural language instructions are key\nto the success of instruction tuning.", + "authors": "Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le", + "published": "2021-09-03", + "updated": "2022-02-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.09085v1", + "title": "Galactica: A Large Language Model for Science", + "abstract": "Information overload is a major obstacle to scientific progress. The\nexplosive growth in scientific literature and data has made it ever harder to\ndiscover useful insights in a large mass of information. Today scientific\nknowledge is accessed through search engines, but they are unable to organize\nscientific knowledge alone. In this paper we introduce Galactica: a large\nlanguage model that can store, combine and reason about scientific knowledge.\nWe train on a large scientific corpus of papers, reference material, knowledge\nbases and many other sources. We outperform existing models on a range of\nscientific tasks. On technical knowledge probes such as LaTeX equations,\nGalactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also\nperforms well on reasoning, outperforming Chinchilla on mathematical MMLU by\n41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It\nalso sets a new state-of-the-art on downstream tasks such as PubMedQA and\nMedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general\ncorpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these\nresults demonstrate the potential for language models as a new interface for\nscience. We open source the model for the benefit of the scientific community.", + "authors": "Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, Robert Stojnic", + "published": "2022-11-16", + "updated": "2022-11-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.07830v3", + "title": "Reframing Instructional Prompts to GPTk's Language", + "abstract": "What kinds of instructional prompts are easier to follow for Language Models\n(LMs)? We study this question by conducting extensive empirical analysis that\nshed light on important features of successful instructional prompts.\nSpecifically, we study several classes of reframing techniques for manual\nreformulation of prompts into more effective ones. Some examples include\ndecomposing a complex task instruction into multiple simpler tasks or itemizing\ninstructions into sequential steps. 
Our experiments compare the zero-shot and\nfew-shot performance of LMs prompted with reframed instructions on 12 NLP tasks\nacross 6 categories. Compared with original instructions, our reframed\ninstructions lead to significant improvements across LMs with different sizes.\nFor example, the same reframed prompts boost few-shot performance of\nGPT3-series and GPT2-series by 12.5% and 6.7% respectively averaged over all\ntasks. Furthermore, reframed instructions reduce the number of examples\nrequired to prompt LMs in the few-shot setting. We hope these\nempirically-driven techniques will pave the way towards more effective future\nprompting algorithms.", + "authors": "Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi", + "published": "2021-09-16", + "updated": "2022-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1910.10683v4", + "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", + "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task\nbefore being fine-tuned on a downstream task, has emerged as a powerful\ntechnique in natural language processing (NLP). The effectiveness of transfer\nlearning has given rise to a diversity of approaches, methodology, and\npractice. In this paper, we explore the landscape of transfer learning\ntechniques for NLP by introducing a unified framework that converts all\ntext-based language problems into a text-to-text format. Our systematic study\ncompares pre-training objectives, architectures, unlabeled data sets, transfer\napproaches, and other factors on dozens of language understanding tasks. By\ncombining the insights from our exploration with scale and our new ``Colossal\nClean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks\ncovering summarization, question answering, text classification, and more. To\nfacilitate future work on transfer learning for NLP, we release our data set,\npre-trained models, and code.", + "authors": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu", + "published": "2019-10-23", + "updated": "2023-09-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.10526v3", + "title": "Open Domain Multi-document Summarization: A Comprehensive Study of Model Brittleness under Retrieval", + "abstract": "Multi-document summarization (MDS) assumes a set of topic-related documents\nare provided as input. In practice, this document set is not always available;\nit would need to be retrieved given an information need, i.e. a question or\ntopic statement, a setting we dub \"open-domain\" MDS. We study this more\nchallenging setting by formalizing the task and bootstrapping it using existing\ndatasets, retrievers and summarizers. Via extensive automatic and human\nevaluation, we determine: (1) state-of-the-art summarizers suffer large\nreductions in performance when applied to open-domain MDS, (2) additional\ntraining in the open-domain setting can reduce this sensitivity to imperfect\nretrieval, and (3) summarizers are insensitive to the retrieval of duplicate\ndocuments and the order of retrieved documents, but highly sensitive to other\nerrors, like the retrieval of irrelevant documents. 
Based on our results, we\nprovide practical guidelines to enable future work on open-domain MDS, e.g. how\nto choose the number of retrieved documents to summarize. Our results suggest\nthat new retrieval and summarization methods and annotated resources for\ntraining and evaluation are necessary for further progress in the open-domain\nsetting.", + "authors": "John Giorgi, Luca Soldaini, Bo Wang, Gary Bader, Kyle Lo, Lucy Lu Wang, Arman Cohan", + "published": "2022-12-20", + "updated": "2023-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1906.06045v1", + "title": "Learning to Ask Unanswerable Questions for Machine Reading Comprehension", + "abstract": "Machine reading comprehension with unanswerable questions is a challenging\ntask. In this work, we propose a data augmentation technique by automatically\ngenerating relevant unanswerable questions according to an answerable question\npaired with its corresponding paragraph that contains the answer. We introduce\na pair-to-sequence model for unanswerable question generation, which\neffectively captures the interactions between the question and the paragraph.\nWe also present a way to construct training data for our question generation\nmodels by leveraging the existing reading comprehension dataset. Experimental\nresults show that the pair-to-sequence model performs consistently better\ncompared with the sequence-to-sequence baseline. We further use the\nautomatically generated unanswerable questions as a means of data augmentation\non the SQuAD 2.0 dataset, yielding 1.9 absolute F1 improvement with BERT-base\nmodel and 1.7 absolute F1 improvement with BERT-large model.", + "authors": "Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu", + "published": "2019-06-14", + "updated": "2019-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.08773v4", + "title": "Cross-Task Generalization via Natural Language Crowdsourcing Instructions", + "abstract": "Humans (e.g., crowdworkers) have a remarkable ability in solving different\ntasks, by simply reading textual instructions that define them and looking at a\nfew examples. Despite the success of the conventional supervised learning on\nindividual datasets, such models often struggle with generalization across\ntasks (e.g., a question-answering system cannot solve classification tasks). A\nlong-standing challenge in AI is to build a model that learns a new task by\nunderstanding the human-readable instructions that define it. To study this, we\nintroduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their\nhuman-authored instructions, and 193k task instances (input-output pairs). The\ninstructions are obtained from crowdsourcing instructions used to create\nexisting NLP datasets and mapped to a unified schema. Using this meta-dataset,\nwe measure cross-task generalization by training models on seen tasks and\nmeasuring generalization to the remaining unseen ones. We adopt generative\npre-trained language models to encode task-specific instructions along with\ninput and generate task output. Our results indicate that models benefit from\ninstructions when evaluated in terms of generalization to unseen tasks (19%\nbetter for models utilizing instructions). 
These models, however, are far\nbehind an estimated performance upperbound indicating significant room for more\nprogress in this direction.", + "authors": "Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi", + "published": "2021-04-18", + "updated": "2022-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.00295v1", + "title": "CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation", + "abstract": "The full power of human language-based communication cannot be realized\nwithout negation. All human languages have some form of negation. Despite this,\nnegation remains a challenging phenomenon for current natural language\nunderstanding systems. To facilitate the future development of models that can\nprocess negation effectively, we present CONDAQA, the first English reading\ncomprehension dataset which requires reasoning about the implications of\nnegated statements in paragraphs. We collect paragraphs with diverse negation\ncues, then have crowdworkers ask questions about the implications of the\nnegated statement in the passage. We also have workers make three kinds of\nedits to the passage -- paraphrasing the negated statement, changing the scope\nof the negation, and reversing the negation -- resulting in clusters of\nquestion-answer pairs that are difficult for models to answer with spurious\nshortcuts. CONDAQA features 14,182 question-answer pairs with over 200 unique\nnegation cues and is challenging for current state-of-the-art models. The best\nperforming model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our\nconsistency metric, well below human performance which is 81%. We release our\ndataset, along with fully-finetuned, few-shot, and zero-shot evaluations, to\nfacilitate the development of future NLP methods that work on negated language.", + "authors": "Abhilasha Ravichander, Matt Gardner, Ana Marasovi\u0107", + "published": "2022-11-01", + "updated": "2022-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.09110v2", + "title": "Holistic Evaluation of Language Models", + "abstract": "Language models (LMs) are becoming the foundation for almost all major\nlanguage technologies, but their capabilities, limitations, and risks are not\nwell understood. We present Holistic Evaluation of Language Models (HELM) to\nimprove the transparency of language models. First, we taxonomize the vast\nspace of potential scenarios (i.e. use cases) and metrics (i.e. desiderata)\nthat are of interest for LMs. Then we select a broad subset based on coverage\nand feasibility, noting what's missing or underrepresented (e.g. question\nanswering for neglected English dialects, metrics for trustworthiness). Second,\nwe adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration,\nrobustness, fairness, bias, toxicity, and efficiency) for each of 16 core\nscenarios when possible (87.5% of the time). This ensures metrics beyond\naccuracy don't fall to the wayside, and that trade-offs are clearly exposed. We\nalso perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze\nspecific aspects (e.g. reasoning, disinformation). 
Third, we conduct a\nlarge-scale evaluation of 30 prominent language models (spanning open,\nlimited-access, and closed models) on all 42 scenarios, 21 of which were not\npreviously used in mainstream LM evaluation. Prior to HELM, models on average\nwere evaluated on just 17.9% of the core HELM scenarios, with some prominent\nmodels not sharing a single scenario in common. We improve this to 96.0%: now\nall 30 models have been densely benchmarked on the same core scenarios and\nmetrics under standardized conditions. Our evaluation surfaces 25 top-level\nfindings. For full transparency, we release all raw model prompts and\ncompletions publicly for further analysis, as well as a general modular\ntoolkit. We intend for HELM to be a living benchmark for the community,\ncontinuously updated with new scenarios, metrics, and models.", + "authors": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher R\u00e9, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda", + "published": "2022-11-16", + "updated": "2023-10-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.03011v1", + "title": "A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers", + "abstract": "Readers of academic research papers often read with the goal of answering\nspecific questions. Question Answering systems that can answer those questions\ncan make consumption of the content much more efficient. However, building such\ntools requires data that reflect the difficulty of the task arising from\ncomplex reasoning about claims made in multiple parts of a paper. In contrast,\nexisting information-seeking question answering datasets usually contain\nquestions about generic factoid-type information. We therefore present QASPER,\na dataset of 5,049 questions over 1,585 Natural Language Processing papers.\nEach question is written by an NLP practitioner who read only the title and\nabstract of the corresponding paper, and the question seeks information present\nin the full text. The questions are then answered by a separate set of NLP\npractitioners who also provide supporting evidence to answers. We find that\nexisting models that do well on other QA tasks do not perform well on answering\nthese questions, underperforming humans by at least 27 F1 points when answering\nthem from entire papers, motivating further research in document-grounded,\ninformation-seeking QA, which our dataset is designed to facilitate.", + "authors": "Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. 
Smith, Matt Gardner", + "published": "2021-05-07", + "updated": "2021-05-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.11916v4", + "title": "Large Language Models are Zero-Shot Reasoners", + "abstract": "Pretrained large language models (LLMs) are widely used in many sub-fields of\nnatural language processing (NLP) and generally known as excellent few-shot\nlearners with task-specific exemplars. 
Notably, chain of thought (CoT)\nprompting, a recent technique for eliciting complex multi-step reasoning\nthrough step-by-step answer examples, achieved the state-of-the-art\nperformances in arithmetics and symbolic reasoning, difficult system-2 tasks\nthat do not follow the standard scaling laws for LLMs. While these successes\nare often attributed to LLMs' ability for few-shot learning, we show that LLMs\nare decent zero-shot reasoners by simply adding \"Let's think step by step\"\nbefore each answer. Experimental results demonstrate that our Zero-shot-CoT,\nusing the same single prompt template, significantly outperforms zero-shot LLM\nperformances on diverse benchmark reasoning tasks including arithmetics\n(MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin\nFlip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled\nObjects), without any hand-crafted few-shot examples, e.g. increasing the\naccuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with\nlarge InstructGPT model (text-davinci-002), as well as similar magnitudes of\nimprovements with another off-the-shelf large model, 540B parameter PaLM. The\nversatility of this single prompt across very diverse reasoning tasks hints at\nuntapped and understudied fundamental zero-shot capabilities of LLMs,\nsuggesting high-level, multi-task broad cognitive capabilities may be extracted\nby simple prompting. We hope our work not only serves as the minimal strongest\nzero-shot baseline for the challenging reasoning benchmarks, but also\nhighlights the importance of carefully exploring and analyzing the enormous\nzero-shot knowledge hidden inside LLMs before crafting finetuning datasets or\nfew-shot exemplars.", + "authors": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa", + "published": "2022-05-24", + "updated": "2023-01-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. 
Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. 
In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. 
We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. 
Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. 
Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. 
The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). 
Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. 
Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. 
Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. 
Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. 
This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. 
The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. 
We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. 
Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. 
Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. 
Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. 
Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. 
Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. 
But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. 
Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. 
Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. 
This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. 
To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. 
Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. 
The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. 
Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.",\n    "authors": "Pouya Pezeshkpour, Estevam Hruschka",\n    "published": "2023-08-22",\n    "updated": "2023-08-22",\n    "primary_cat": "cs.CL",\n    "cats": [\n      "cs.CL",\n      "cs.AI",\n      "cs.LG"\n    ],\n    "category": "LLM Fairness"\n  },\n  {\n    "url": "http://arxiv.org/abs/2403.14473v1",\n    "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)",\n    "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. 
In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. 
Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.",\n    "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He",\n    "published": "2023-05-12",\n    "updated": "2023-10-17",\n    "primary_cat": "cs.IR",\n    "cats": [\n      "cs.IR",\n      "cs.CL",\n      "cs.CY"\n    ],\n    "category": "LLM Fairness"\n  },\n  {\n    "url": "http://arxiv.org/abs/2310.05694v1",\n    "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics",\n    "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.",\n    "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria",\n    "published": "2023-10-09",\n    "updated": "2023-10-09",\n    "primary_cat": "cs.CL",\n    "cats": [\n      "cs.CL"\n    ],\n    "category": "LLM Fairness"\n  },\n  {\n    "url": "http://arxiv.org/abs/2401.00625v2",\n    "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models",\n    "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. 
Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. 
Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. 
On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are poised to meaningfully participate\nin social science analysis in partnership with humans.",\n    "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang",\n    "published": "2023-04-12",\n    "updated": "2024-02-26",\n    "primary_cat": "cs.CL",\n    "cats": [\n      "cs.CL",\n      "cs.LG"\n    ],\n    "category": "LLM Fairness"\n  },\n  {\n    "url": "http://arxiv.org/abs/2311.01964v1",\n    "title": "Don't Make Your LLM an Evaluation Benchmark Cheater",\n    "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.",\n    "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han",\n    "published": "2023-11-03",\n    "updated": "2023-11-03",\n    "primary_cat": "cs.CL",\n    "cats": [\n      "cs.CL",\n      "cs.AI"\n    ],\n    "category": "LLM Fairness"\n  },\n  {\n    "url": "http://arxiv.org/abs/2308.05345v3",\n    "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model",\n    "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. 
In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. 
With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. 
We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. 
Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. 
Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.16109v1", + "title": "zkLLM: Zero Knowledge Proofs for Large Language Models", + "abstract": "The recent surge in artificial intelligence (AI), characterized by the\nprominence of large language models (LLMs), has ushered in fundamental\ntransformations across the globe. However, alongside these advancements,\nconcerns surrounding the legitimacy of LLMs have grown, posing legal challenges\nto their extensive applications. Compounding these concerns, the parameters of\nLLMs are often treated as intellectual property, restricting direct\ninvestigations.\n In this study, we address a fundamental challenge within the realm of AI\nlegislation: the need to establish the authenticity of outputs generated by\nLLMs. To tackle this issue, we present zkLLM, which stands as the inaugural\nspecialized zero-knowledge proof tailored for LLMs to the best of our\nknowledge. Addressing the persistent challenge of non-arithmetic operations in\ndeep learning, we introduce tlookup, a parallelized lookup argument designed\nfor non-arithmetic tensor operations in deep learning, offering a solution with\nno asymptotic overhead. Furthermore, leveraging the foundation of tlookup, we\nintroduce zkAttn, a specialized zero-knowledge proof crafted for the attention\nmechanism, carefully balancing considerations of running time, memory usage,\nand accuracy.\n Empowered by our fully parallelized CUDA implementation, zkLLM emerges as a\nsignificant stride towards achieving efficient zero-knowledge verifiable\ncomputations over LLMs. Remarkably, for LLMs boasting 13 billion parameters,\nour approach enables the generation of a correctness proof for the entire\ninference process in under 15 minutes. The resulting proof, compactly sized at\nless than 200 kB, is designed to uphold the privacy of the model parameters,\nensuring no inadvertent information leakage.", + "authors": "Haochen Sun, Jason Li, Hongyang Zhang", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Starting with Zhang et al. in 2020 [65], the field of zero-knowledge machine learning inference has seen active development. Initial research, parallel to the surge in computer vision studies, primarily concentrated on authenticating inference results for computer vision tasks over convolutional neural networks (CNNs). Key contributions include zkCNN [38], ZEN [18], vCNN [35], pvCNN [57], zkML [33], Mystique [56], and ezDPS [55]. These works aimed to optimize the adaptation of the entire training process to zeroknowledge proof (ZKP) backends, such as zkSNARKS [3\u20135, 7, 10, 24, 28, 36, 42], by leveraging the special structures of computations within CNNs. Notably, zkCNN [38] introduced a specialized interactive proof protocol for convolutional layers based on the GKR protocol [27] and its refinements [60, 61, 66]. 
This protocol achieved efficient proofs (less than 2 minutes) on VGG-scale CNNs, highlighting the necessity of specialized protocols for realistic zero-knowledge machine learning inference. However, to the best of our current knowledge, there exists a gap in zero-knowledge inferences over LLMs. The intricate structures and enormous sizes of LLMs present challenges not addressed by previous studies focused on CNNs, necessitating novel theoretical and experimental developments. Conversely, while pioneering studies have addressed ZKPs for machine learning training, with works such as VeriML [68], proof of unlearning [17, 58], and zkPoT [25] focusing on elementary algorithms like Support Vector Machines (SVM), logistic regression, and small neural networks (up to several thousand parameters), extending zero-knowledge proofs to training LLMs may pose insurmountable challenges. The vast complexity inherent in training LLMs could render the zero-knowledge proofs for their training impractical. [Figure 5: Overhead of zkAttn; proving time (s), proof size (kB), and verifying time (s) against sequence lengths from 128 to 4096, for OPT-125M, OPT-350M, OPT-1.3B, OPT-2.7B, OPT-6.7B, and OPT-13B.]",
+    "pre_questions": [],
+    "main_content": "zkLLM: Zero Knowledge Proofs for Large Language Models. Haochen Sun (haochen.sun@uwaterloo.ca), Jason Li (j2643li@uwaterloo.ca), and Hongyang Zhang (hongyang.zhang@uwaterloo.ca), University of Waterloo, Waterloo, Ontario, Canada. ABSTRACT The recent surge in artificial intelligence (AI), characterized by the prominence of large language models (LLMs), has ushered in fundamental transformations across the globe. However, alongside these advancements, concerns surrounding the legitimacy of LLMs have grown, posing legal challenges to their extensive applications. Compounding these concerns, the parameters of LLMs are often treated as intellectual property, restricting direct investigations. In this study, we address a fundamental challenge within the realm of AI legislation: the need to establish the authenticity of outputs generated by LLMs. To tackle this issue, we present zkLLM, which stands as the inaugural specialized zero-knowledge proof tailored for LLMs to the best of our knowledge. Addressing the persistent challenge of non-arithmetic operations in deep learning, we introduce tlookup, a parallelized lookup argument designed for non-arithmetic tensor operations in deep learning, offering a solution with no asymptotic overhead. Furthermore, leveraging the foundation of tlookup, we introduce zkAttn, a specialized zero-knowledge proof crafted for the attention mechanism, carefully balancing considerations of running time, memory usage, and accuracy. Empowered by our fully parallelized CUDA implementation, zkLLM emerges as a significant stride towards achieving efficient zero-knowledge verifiable computations over LLMs. Remarkably, for LLMs boasting 13 billion parameters, our approach enables the generation of a correctness proof for the entire inference process in under 15 minutes.
The resulting proof, compactly sized at less than 200 kB, is designed to uphold the privacy of the model parameters, ensuring no inadvertent information leakage. 1 INTRODUCTION The recent surge in artificial intelligence (AI), particularly with the advent of Large Language Models (LLMs) [1, 8, 12, 41, 51, 52], has profoundly transformed the world. However, these technological advances have also raised concerns about the legitimacy of these groundbreaking models, challenging the legal underpinnings of their extensive applications. For instance, in December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft, accusing them of using copyrighted material from the newspaper to train their chatbots. In October 2023, President Biden issued an executive order to address both the \"myriad benefits\" and \"substantial risks\" posed by AI. As laws and regulations around LLMs evolve and tighten, developing practical tools to verify the legitimacy of these models has become crucial. Consider the auditing process of a newly-released LLM, which is hosted on a cloud service (e.g., Microsoft Azure) with API access. Law enforcement queries the model using designated prompts to test if the LLM generates illegal output (e.g., untrue, violenceprompting, or racist). In the stringent legal context, the authenticity of the output must be established to exclude the possibility of cheating by manipulating the generated texts. On the other hand, although the architectures are typically described in technical reports, the trained parameters are concealed as the AI developers\u2019 intellectual properties, making direct examination of the model parameters impossible. This dilemma calls for the application of zero-knowledge proofs (ZKPs), which allow for verifiable computations over the neural networks while disclosing no information about the neural network parameters [18, 33, 35, 38, 55\u201357]. However, adapting existing ZKP techniques to modern LLMs, characterized by their immense scale, presents significant challenges. These models require substantial computational resources, which general-purpose ZKP frameworks [3\u20135, 7, 10, 24, 28, 36, 42], often unaware of LLM structure and limited in parallel computation support, struggle to provide. While early research has explored specialized cryptographic protocols for specific neural network architectures like convolutional neural networks (CNNs) [35, 38, 57], LLMs\u2019 complex internal structures necessitate further innovation in ZKP protocol design. This innovation is vital to avoid the excessive overhead typical of general-purpose ZKPs. LLMs involve many non-arithmetic operations, such as GELU [31] and SwiGLU [47] activation functions, which only partially align with current ZKP methods. Lookup arguments, trading memory consumption for faster runtimes, have been introduced [33] to handle these nonlinearities, but their straightforward application raises questions about manageable memory overhead. Moreover, the attention mechanism in LLMs [53], which is inherently multivariate and often employs the Softmax function, requires a tailored ZKP protocol design for effective management of proof overhead. Tackling this mechanism within ZKPs is challenging, particularly as its components are not typically found in previously explored neural network architectures, such as MLPs and CNNs. 
In these traditional models, Softmax functions are usually placed after the output layer and are therefore not considered in prior works on zero-knowledge verifiable deep learning. This setup is in stark contrast with LLMs, where Softmax functions are used extensively across multiple layers. This prevalent use in LLMs necessitates a more refined approach in ZKP design to ensure both precise and efficient zero-knowledge verification, especially given the unique challenges presented by the attention mechanism. In response to these challenges, we present zkLLM, the inaugural ZKP scheme specifically designed for LLMs. zkLLM empowers LLM owners to validate the integrity of inference outcomes to stakeholders, such as law enforcement agencies, thereby streamlining investigations involving LLMs while safeguarding intellectual property. Our key contributions are: 1 arXiv:2404.16109v1 [cs.LG] 24 Apr 2024 CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. Haochen Sun, Jason Li, and Hongyang Zhang \u2022 We propose tlookup, a unique ZKP protocol for universal non-arithmetic operations in deep learning, to tackle the persistent challenge of verifying such operations (e.g., activation functions). tlookup adeptly handles overhead in two ways: analytically, it adds no asymptotic overhead in memory complexity or running time; practically, its design promotes a high level of parallelization, fully leveraging parallel computing resources (like GPUs) commonly used in LLM computing environments. \u2022 We introduce zkAttn, a ZKP specifically crafted for attention mechanisms in LLMs. Building upon tlookup and enhancing its capabilities, zkAttn mitigates the accuracy degradation and high overheads linked with bit-decompositions and polynomial approximations. It also removes the necessity to list all multivariate input-output pairs, a prerequisite in lookupbased methods, by harnessing the mathematical properties of the attention mechanism. This strategy strikes a balance between running time, memory usage, and accuracy, while maintaining security and privacy standards. \u2022 Our efficient CUDA implementation, in conjunction with the aforementioned technical advancements, positions zkLLM as the trailblazing ZKP for LLMs of sizes up to 13 billion parameters. zkLLM achieves reasonable proving times of 1-15 minutes and produces compact proofs smaller than 200kB. These proofs can be verified within 1-3 seconds by the verifier and guarantee no exposure of model parameters. 2 TECHNICAL OVERVIEW Compared with general-purpose counterparts, an efficient zeroknowledge proof system specialized for deep learning hinges critically upon two key requirements: \u2022 The capability for extensive parallelization (for example, using CUDA), which allows for the handling of proofs for the entire computational process in a reasonable timeframe. \u2022 The adept handling of non-arithmetic operations, encompassing activation functions among others. Although sumcheck-based protocols are known to be compatible with tensor structures common in deep learning computations [26, 38] , traditionally, they have depended on bit-decomposition methods for non-arithmetic operations. This dependence leads to an increase in prover overhead and restricts the variety of nonarithmetic operations that can be supported. In response, we have developed a novel sumcheck-based protocol for lookup arguments over tensors. 
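To make the compatibility between the sumcheck protocol and tensor-shaped data concrete, the following is a small, single-threaded sketch (our own illustration, not the CUDA implementation described later) of a sumcheck prover whose per-round work consists only of reductions and elementwise folds over halves of an evaluation table, exactly the kind of workload that parallelizes naturally on GPUs. The toy prime, the use of plain Python lists, and the helper names are placeholder choices.

```python
# A minimal, single-threaded sketch of the sumcheck protocol for a multilinear
# polynomial given by its table of evaluations over {0,1}^d. Every prover step
# is an elementwise operation or a reduction over a contiguous half of the
# table, which is why the real protocol parallelizes well on GPUs.
import random

P = 2**61 - 1  # toy prime; the real system works over a much larger field

def sumcheck(evals, rng):
    """Prove/verify that the sum of `evals` over {0,1}^d equals the initial claim."""
    claim = sum(evals) % P
    table = [e % P for e in evals]
    transcript = []
    while len(table) > 1:
        half = len(table) // 2
        low, high = table[:half], table[half:]          # current variable fixed to 0 / 1
        g0, g1 = sum(low) % P, sum(high) % P            # degree-1 round message
        assert (g0 + g1) % P == claim                   # verifier's round check
        r = rng.randrange(P)                            # verifier's challenge
        claim = (g0 + r * (g1 - g0)) % P                # claim carried to the next round
        table = [(a + r * (b - a)) % P for a, b in zip(low, high)]  # elementwise fold
        transcript.append((g0, g1, r))
    # Final check: the folded table holds the multilinear extension at the random
    # point; in the real protocol this value comes from a commitment instead.
    assert table[0] % P == claim
    return transcript

if __name__ == "__main__":
    rng = random.Random(0)
    evals = [rng.randrange(P) for _ in range(1 << 10)]  # d = 10 variables
    sumcheck(evals, rng)
    print("sumcheck transcript verified for", len(evals), "evaluations")
```

In the actual protocol, the final folded value is not revealed directly; it is tied to a committed tensor through the proof-of-evaluation machinery reviewed in Section 3.4.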
Our design capitalizes on the following fact: for S \u2208F\ud835\udc37and T \u2208 F\ud835\udc41, the set inclusion S \u2286T holds if and only if there is an m such that \u00cd \ud835\udc56\u2208[\ud835\udc37] (\ud835\udc4b+S\ud835\udc56)\u22121 \u2261\u00cd \ud835\udc56\u2208[\ud835\udc41] m\ud835\udc56(\ud835\udc4b+T\ud835\udc56)\u22121 as rational functions over \ud835\udc4b\u2208F [30]. This equivalence can be verified by evaluating both expressions at a single point \ud835\udc4b\u2190\ud835\udefd, randomly selected by the verifier. Furthermore, m can be computed in \ud835\udc42(\ud835\udc37) time using straightforward counting. Hence, by calculating the elementwise multiplicative inversions A \u2190(\ud835\udefd+ S)\u22121 and B \u2190(\ud835\udefd+ T)\u22121, we can parallelize the sumcheck protocol for the identity (A.sum() = m\u22a4B) \u2227(A \u2299(\ud835\udefd+ S) = 1) \u2227(B \u2299(\ud835\udefd+ T) = 1) (1) This approach stands in contrast to the sequential lookup arguments [16, 22, 23, 44, 62, 63] that are based on univariate polynomials. For the attention mechanism widely applied in modern LLMs, represented by the equation Attention(Q, K, V) := Softmax \u0012 QK\u22a4 \u221a \ud835\udc51 \u0013 V, (2) the direct application of lookup arguments, such as tlookup, presents the impractical challenge of compiling all input-output pairs into a lookup table due to the multivariate nature of the attention mechanism. To achieve zero-knowledge verifiability with limited overhead for the attention mechanism, we introduce zkAttn, as depicted in Figure 1: (1) Implement the matrix multiplication between the query Q and keys K\u22a4, resulting in Z \u2190QK\u22a4. This process is verifiable through the dedicated sumcheck protocol designed specifically for matrix multiplications. (2) Exploit the shift-invariance property of Softmax to adjust each row of Z by a constant, represented as a vector \u02c6 z, so that exp(Z \u2212\u02c6 z1\u22a4) sums to 1 row-wise. This transformation renders the Softmax output equivalent to applying exp(\u00b7) element-wise to Z\u2032 := Z \u2212\u02c6 z1\u22a4. However, computing \u02c6 z from Z is highly intricate and not directly verifiable. (3) Transform Z\u2032 into negative \ud835\udc3e-digit base-\ud835\udc4fnumbers, with each Z\u2032 = \u2212\u00cd\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udc4f\ud835\udc58Z(\ud835\udc58). By utilizing the homomorphism of exp (\u00b7), the Softmax output Y is then expressed as Y = exp \u2212 \ud835\udc3e\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc4f\ud835\udc58Z(\ud835\udc58) ! = \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 exp \u0010 \u2212\ud835\udc4f\ud835\udc58Z(\ud835\udc58)\u0011 , (3) with a tlookup installed for each term of the \ud835\udc3ein the product to handle the non-arithmetic operation. (4) Rather than verifying the correctness of \u02c6 z directly, which is highly non-arithmetic, an additional check is introduced to ensure the rowwise sums of Y equal 1. (5) Implement another verifiable matrix multiplication between the Softmax output Y and the values V. Note that the above overview omits details about the handling of scaling factors and quantization errors for clarity. The design of zkAttn manages the overhead of verifiable computation for the highly non-arithmetic operations within the attention mechanism while preserving computational accuracy. 3 PRELIMINARIES 3.1 Notations We represent vectors and tensors in bold font, such as v for vectors and S for tensors. 
Consistent with the cryptographic frameworks we utilize, we apply 0-based indexing to all mathematical structures. For simple operations and indexing over tensors, we adhere to the PyTorch conventions, using notations like v[i] or the more concise v_i for elements, S[i, j_0:j_1, :] for slicing, and S.sum(axis=0) for aggregation along a specified dimension. Moreover, we denote the set of non-negative integers less than a positive integer N using the compact form [N], which signifies {0, 1, ..., N-1}.

[Figure 1: Overview of zkAttn for (2). Q and K are multiplied (matmul) into Z, which is normalized row-wise into Z' so that ẑ is verified only indirectly; Z' is split into b-ary segments Z^(0), ..., Z^(K-1), each passed through a tlookup for exp(b^k ·); the resulting Y^(0), ..., Y^(K-1) are multiplied together into Y, which is checked to satisfy sum(axis=1) == 1 and is finally multiplied (matmul) with V to form the output.]

Q: As the developer of an LLM, law enforcement seeks to verify the model's authenticity by having me execute the inference process using their designated prompts. I aim to establish the authenticity of my output without revealing any details about the model parameters. Can ZKP be utilized in the inference process?
A: Yes, ZKP can be utilized in the inference process of an LLM to establish the authenticity of the output without revealing any details about the model parameters. This allows one party to prove to another that a certain statement is true, without conveying any additional information. In this case, it can prove that the model's output came from a specific model.
Q: What are some benefits of applying ZKPs in this scenario?
A: Utilizing ZKPs in the inference process of an LLM can provide authenticity and privacy. It can confirm the output's origin from a specific model without revealing any model details, protecting any sensitive or proprietary information. The generated proofs are verifiable, allowing anyone to confirm the output's authenticity. Some ZKP protocols are also scalable, accommodating large models and complex computations, which is beneficial for LLMs.
Figure 2: An example dialogue with GPT-3.5 regarding zkLLM's motivation.

3.2 Large language models, transformers and the attention mechanism Large Language Models (LLMs), exemplified by the GPT series [8, 41], PaLM [12], and LLaMa [51], are renowned for their exceptional performance in general-purpose language understanding and generation tasks. These models are built upon the transformer architecture, which is fundamentally based on the attention mechanism [53]. As depicted in Figure 3, LLMs typically consist of multiple layers that transform the embeddings of an input sequence of tokens using multi-head attention. In each attention head $i$, parameterized by linear weights $\mathbf{W}^Q_i$, $\mathbf{W}^K_i$, and $\mathbf{W}^V_i$, the queries $\mathbf{Q}_i \leftarrow \mathbf{X}\mathbf{W}^Q_i$, keys $\mathbf{K}_i \leftarrow \mathbf{X}\mathbf{W}^K_i$, and values $\mathbf{V}_i \leftarrow \mathbf{X}\mathbf{W}^V_i$ are computed.
These components are then processed by the Attention function, concatenated, and projected using another linear weight $\mathbf{W}^O$:
$$\mathbf{O} \leftarrow \mathrm{Concat}_i\left(\mathrm{Attention}(\mathbf{Q}_i, \mathbf{K}_i, \mathbf{V}_i)\right)\mathbf{W}^O. \quad (4)$$
The output O is subsequently processed by a feed-forward multi-layer perceptron (MLP). The activations from the final layer are then transformed into a probability distribution over the output tokens, from which the output sequence is autoregressively sampled. [Figure 3: Typical structure of LLMs; a sequence of tokens is embedded and passed through L layers, each consisting of multi-head attention (per-head Q, K, V) followed by concatenation, projection, and a feed-forward MLP, with a final linear layer and Softmax from which the output is sampled.] The Attention function, as defined in (2), effectively mimics cognitive attention and has significantly contributed to the success of LLMs. The adaptation of the attention mechanism for zero-knowledge verifiability is a primary focus of this study.

3.3 Sumcheck protocol, multilinear extensions and tensor operations The correctness of arithmetic tensor operations (e.g., matrix multiplication) is verified using the sumcheck protocol [13] over the multilinear extensions [39] of the tensors involved. Consider a tensor $\mathbf{S} \in \mathbb{F}^{D_0 \times D_1 \times \cdots \times D_{K-1}}$ discretized into a finite field $\mathbb{F}$ via scaling and rounding. Without loss of generality, assume that the $D_k$s are all powers of 2 for $0 \le k \le K-1$, or zero-padding may be applied. Thus, writing indices in binary format, $\mathbf{S}$ can also be considered as a function $S(\cdot): \{0,1\}^{\sum_{k=0}^{K-1}\log_2 D_k} \to \mathbb{F}$. Here, $S(\mathbf{i}_0, \mathbf{i}_1, \ldots, \mathbf{i}_{K-1}) = \mathbf{S}_{[i_0, i_1, \ldots, i_{K-1}]}$, where $\mathbf{i}_k$ is the binary representation of any $0 \le i_k \le D_k - 1$. A multivariate polynomial $\tilde{S}(\cdot): \mathbb{F}^{\sum_{k=0}^{K-1}\log_2 D_k} \to \mathbb{F}$ is a multilinear extension of $S(\cdot)$ such that $\tilde{S}(\cdot) \equiv S(\cdot)$ on $\{0,1\}^{\sum_{k=0}^{K-1}\log_2 D_k}$, practically implemented as
$$\tilde{S}(\mathbf{u}_0, \mathbf{u}_1, \ldots, \mathbf{u}_{K-1}) = \sum_{\substack{\mathbf{i}_k \in \{0,1\}^{\log_2 D_k} \\ 0 \le k \le K-1}} \tilde{e}\left(\bigoplus_{k=0}^{K-1}\mathbf{u}_k, \bigoplus_{k=0}^{K-1}\mathbf{i}_k\right) S(\mathbf{i}_0, \mathbf{i}_1, \ldots, \mathbf{i}_{K-1}), \quad (5)$$
where $\tilde{e}(\mathbf{u}, \mathbf{v}) := \prod_{i=0}^{d-1}\left(\mathbf{u}_i\mathbf{v}_i + (1-\mathbf{u}_i)(1-\mathbf{v}_i)\right)$ for any $d$-dimensional $\mathbf{u}, \mathbf{v} \in \mathbb{F}^d$, which reduces to the equality indicator $\mathbb{1}\{\mathbf{u}=\mathbf{v}\}$ when restricted to $\mathbf{u}, \mathbf{v} \in \{0,1\}^d$. The correctness of a tensor operation can be expressed as equalities over the tensors. For instance, for matrix multiplication $\mathbf{C} \leftarrow \mathbf{A}\mathbf{B}$, where $\mathbf{C} \in \mathbb{F}^{D_0 \times D_2}$, $\mathbf{A} \in \mathbb{F}^{D_0 \times D_1}$, and $\mathbf{B} \in \mathbb{F}^{D_1 \times D_2}$, the correctness is characterized by $\mathbf{C}_{[i,j]} = \sum_{k=0}^{D_1-1}\mathbf{A}_{[i,k]}\mathbf{B}_{[k,j]}$ for each $i, j$, or equivalently,
$$\sum_{\mathbf{k} \in \{0,1\}^{\log_2 D_1}} \left( D_1^{-1}\tilde{C}(\mathbf{i}, \mathbf{j}) - \tilde{A}(\mathbf{i}, \mathbf{k})\tilde{B}(\mathbf{k}, \mathbf{j}) \right) = 0. \quad (6)$$
By applying the Schwartz-Zippel Lemma [46, 69], with high probability, (6) holds for all $\mathbf{i}, \mathbf{j}$ if and only if the random linear combination
$$\sum_{\substack{\mathbf{i} \in \{0,1\}^{\log_2 D_0} \\ \mathbf{j} \in \{0,1\}^{\log_2 D_2} \\ \mathbf{k} \in \{0,1\}^{\log_2 D_1}}} \underbrace{\tilde{e}(\mathbf{u}_0 \oplus \mathbf{u}_2, \mathbf{i} \oplus \mathbf{j})\left( D_1^{-1}\tilde{C}(\mathbf{i}, \mathbf{j}) - \tilde{A}(\mathbf{i}, \mathbf{k})\tilde{B}(\mathbf{k}, \mathbf{j}) \right)}_{\text{varies for other tensor operations}} = 0, \quad (7)$$
where $\tilde{e}(\cdot)$ is as defined in (5). Thus, the prover and the verifier can execute the sumcheck protocol [13], which proves statements of the form
$$\sum_{\mathbf{i} \in \{0,1\}^d} f(\mathbf{i}) = 0 \quad (8)$$
for any $d$-variate polynomial ($d = \log_2 D_0 + \log_2 D_1 + \log_2 D_2$ in the case of (7)). The prover time, proof size, and verifier time are $O(2^d)$, $O(d)$, and $O(d)$, respectively. At the end of the protocol, a claim about the value of $f(\mathbf{v})$ is made by the prover (where $\mathbf{v} \sim \mathbb{F}^d$ due to the randomness over the protocol execution), which is further reduced to the claimed evaluations of the multilinear extensions (i.e., $\tilde{C}(\mathbf{v}_0, \mathbf{v}_2)$, $\tilde{A}(\mathbf{v}_0, \mathbf{v}_1)$, $\tilde{B}(\mathbf{v}_1, \mathbf{v}_2)$ in (7), with the indices decomposed as $\mathbf{v} = \mathbf{v}_0 \oplus \mathbf{v}_1 \oplus \mathbf{v}_2$ with the corresponding dimensionalities). These claims are further verified via the proofs of evaluation on the commitments of the tensors introduced in Section 3.4. Optimized adaptations of the sumcheck protocol are designed to align with standard operations in deep learning, such as matrix multiplication [26, 49] (see Section 6.1.1) and convolution [38]. The preservation of the tensor structure enables the parallelization of the proof. Moreover, zero-knowledge adaptations of the sumcheck protocol [11, 60, 61] have been developed to prevent any disclosure of information related to the tensors, while adding a negligible additional computational burden.

3.4 Polynomial Commitment The binding and hiding requirements for the tensors, in the form of multilinear extensions, which are considered the intellectual property of the prover, are achieved using polynomial commitment schemes. Specifically, the following establishes the correctness of $\tilde{S}(\mathbf{v})$ in zero-knowledge for any tensor S (assumed to be one-dimensional for simplicity) and any v with matching dimensionality:
• pp ← KeyGen(1^λ) generates the public parameters used in the scheme, where λ is the security parameter of the scheme.
• ⟦S⟧ ← Commit(S, r; pp) generates a binding and hiding commitment ⟦S⟧ of S, such that ⟦S⟧ leaks no information about S, and no polynomial-time adversary can compute S' ≠ S and r' such that ⟦S⟧ = Commit(S', r'; pp).
• (y, π) ← ProveEval(S, ⟦S⟧, v, r; pp) allows the prover to compute y ← S̃(v) for any v with matching dimensionality, and creates a proof of evaluation that y = S̃(v) with respect to the committed S.
• True/False ← Verify(y, π, ⟦S⟧, v; pp) allows the verifier to verify the correctness of y, such that
– (Completeness) if (y, π) = ProveEval(S, ⟦S⟧, v, r; pp), then the output is True.
\u2013 (Soundness) if \ud835\udc66\u2260e S (v), then the output is False with 1 \u2212negl (\ud835\udf06) probability. \u2013 (Zero-knowledge) the verifier learns no information beyond \ud835\udc66= e S (v). In the absence of ambiguity, we omit the randomness \ud835\udc5fand public parameters pp in the subsequent context. In this study, Hyrax [54], a variant of the Pedersen commitment [43] that does not require a trusted setup, is used as an instantiation of the polynomial commitment scheme. It operates on a cyclic group G (typically an elliptic curve), with the hardness assumption of the discrete log problem, and is isomorphic to the addition of F. Hyrax is homomorphic, such that Commit(S1,\ud835\udc5f1)+Commit(S2,\ud835\udc5f2) = Commit(S1 + S2,\ud835\udc5f1 + \ud835\udc5f2) for any two tensors S1, S2 and randomness \ud835\udc5f1,\ud835\udc5f2. Hyrax achieves linear complexity in Commit and ProveEval with respect to the dimensionality \ud835\udc37of the tensor S involved and can be further parallelized by concurrently handling operations on all dimensions. It also balances the commitment size, proof size, and verifier\u2019s proof evaluation time, all to sub-linear complexities of \ud835\udc42( \u221a \ud835\udc37),\ud835\udc42(log \ud835\udc37), and \ud835\udc42( \u221a \ud835\udc37) respectively. These improvements are adeptly adopted in zkLLM to minimize both computation and communication burdens. 3.5 Lookup arguments The lookup argument is commonly used to address non-arithmetic operations within the domain of zero-knowledge proofs [50]. Inspired by recent advances [16, 22, 23, 44, 62, 63], the incorporation of lookup arguments into zero-knowledge verifiable deep learning inference [33] has been pursued. In such a setting, a lookup argument verifies that each element in a secret tensor S\ud835\udc37, known only to the prover, is contained within a predefined table T \u2208F\ud835\udc41, mutually acknowledged by both parties. However, the requisite computations for lookup arguments are intrinsically sequential, which contradicts the parallelism preferred in deep learning environments. Furthermore, deploying lookup arguments entails a trade-off between sacrificing precision and incurring excessive memory consumption and trusted setup burdens due to the expansive size of the lookup tables necessary to cover all possible values (with \ud835\udc41being significantly large). In response to these challenges, our proposed lookup arguments for non-arithmetic tensor operations markedly enhance parallelization compared to the largely sequential approaches traditionally used in verifiable deep learning inference. Additionally, our novel proof protocol, tailored for the Softmax function within the attention mechanisms of Transformer models, is designed to optimize the balance between setup and proving times, memory consumptions, and precision. 3.6 Settings and security assumptions We follow the widely recognized framework for zero-knowledge verifiable inferences as outlined in prior research on zero-knowledge machine learning [18, 33, 35, 38, 55\u201357]. In this framework, the prover (such as an AI company) owns an LLM with a publicly known structure (e.g., described in the technical report), while considering the model\u2019s weights as its intellectual property. The prover 4 zkLLM: Zero Knowledge Proofs for Large Language Models CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. 
provides API access to this model for a verifier (like an AI regulation enforcer), who submits a prompt and requests formal proof that the inference result returned by the API is accurate in relation to the prompt and the confidential model. A semi-honest assumption is applied to the verifier: the verifier accurately reports the outcome of the proof verification (whether it is accepted or rejected) but endeavors to glean additional information about the LLM (like hidden parameters) beyond merely confirming the correctness of the inference result. In this study, we assume the use of a commitment scheme that ensures λ-bit security. Correspondingly, the computations are carried out within a finite field $\mathbb{F}$, characterized by a prime order of at least $\Omega(2^{2\lambda})$. Furthermore, we postulate that every aspect of the transformer model and the data (including the number of layers, the dimensions of tensors, and the complexity of operations between them) is polynomially bounded by λ.

4 tlookup: VERIFIABLE NON-ARITHMETIC OPERATIONS FOR DEEP LEARNING In this section, we introduce tlookup, our novel approach to addressing general non-arithmetic operations in deep learning. The tlookup design preserves the widely used tensor-based structure, guaranteeing seamless compatibility with the established computational frameworks in deep learning. tlookup acts as a foundational component of zkAttn, our specialized ZKP protocol tailored for the attention mechanism, detailed in Section 5. Furthermore, tlookup is applicable to other non-arithmetic operations essential to the inference mechanisms within LLMs. We first reduce the non-arithmetic tensor operations to lookup arguments over tensors. Specifically, for a tensor $\mathbf{S} \in \mathbb{F}^D$, the prover aims to convince the verifier that each element of $\mathbf{S}$ exists within $\mathbf{T} \in \mathbb{F}^N$, a table that both parties have full knowledge of. The essence of our approach hinges on the subsequent lemma:

Lemma 4.1 ([30]). Given tensors $\mathbf{S} \in \mathbb{F}^D$ and $\mathbf{T} \in \mathbb{F}^N$, $\mathbf{S} \subset \mathbf{T}$ as sets if and only if there exists $\mathbf{m} \in \mathbb{F}^N$ such that the following identity of rational functions is satisfied:
$$\sum_{i \in [D]} \frac{1}{X + \mathbf{S}_i} = \sum_{i \in [N]} \frac{\mathbf{m}_i}{X + \mathbf{T}_i}. \quad (9)$$
When the condition $\mathbf{S} \subset \mathbf{T}$ holds, the prover constructs $\mathbf{m}$ as
$$\mathbf{m}_i \leftarrow \left|\left\{ j : \mathbf{S}_j = \mathbf{T}_i \right\}\right|, \quad \text{for } 0 \le i \le N-1. \quad (10)$$
The verifier can then confirm the equality presented in (9) by randomly choosing $X \leftarrow \beta \sim \mathbb{F}$. By defining
$$\mathbf{A} := \left(\frac{1}{\beta + \mathbf{S}_i}\right)_{i=0}^{D-1}, \quad \mathbf{B} := \left(\frac{1}{\beta + \mathbf{T}_i}\right)_{i=0}^{N-1}, \quad (11)$$
the aforementioned equality at the random point $\beta$ can be restated as
$$\sum_{i \in [D]} \mathbf{A}_i = \sum_{i \in [N]} \mathbf{m}_i \mathbf{B}_i. \quad (12)$$
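For concreteness, the following is a minimal numerical sketch (our own illustration, not the paper's CUDA implementation) of the check behind (9)-(12) over a small prime field: the multiplicities m are obtained by counting as in (10), and the identity is spot-checked at a random point β through the elementwise inversions A and B. The prime, tensor sizes, and random seed are placeholder choices.

```python
# A small numerical sketch of the identity behind Lemma 4.1 / (9)-(12) over a
# toy prime field: m counts how often each table entry occurs in S, and the
# rational identity is spot-checked at a random point beta via A and B.
import random

P = 2**31 - 1  # toy prime; the actual field is much larger

def inv(x):                      # modular inverse via Fermat's little theorem
    return pow(x % P, P - 2, P)

def lookup_check(S, T, rng):
    m = [sum(1 for s in S if s == t) for t in T]        # multiplicities, as in (10)
    beta = rng.randrange(1, P)                          # verifier's random point
    A = [inv(beta + s) for s in S]                      # elementwise inversions, as in (11)
    B = [inv(beta + t) for t in T]
    lhs = sum(A) % P                                    # sum_i A_i
    rhs = sum(mi * bi for mi, bi in zip(m, B)) % P      # sum_i m_i * B_i, as in (12)
    return lhs == rhs

if __name__ == "__main__":
    rng = random.Random(1)
    T = list(range(16))                                 # a public table
    S = [rng.choice(T) for _ in range(64)]              # every element of S is in T
    print(lookup_check(S, T, rng))                      # True
    S_bad = S[:-1] + [999]                              # 999 is not in T
    print(lookup_check(S_bad, T, rng))                  # False (w.h.p. over beta)
```

When an element of S falls outside T, the two sides disagree except with negligible probability over the choice of β, which is the soundness intuition underlying the protocol that follows.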
Therefore, with the randomness $\mathbf{u} \sim \mathbb{F}^{\log_2 D}$ and $\alpha \sim \mathbb{F}$, the sumcheck for the correctness of (11) and (12) can be formulated as
$$0 = \left(\sum_{\mathbf{i} \in [D]} \tilde{A}(\mathbf{i}) - \sum_{\mathbf{j} \in [N]} \tilde{m}(\mathbf{j})\tilde{B}(\mathbf{j})\right) + \alpha\left(\sum_{\mathbf{i} \in [D]} \tilde{e}(\mathbf{u}, \mathbf{i})\,\tilde{A}(\mathbf{i})\left(\tilde{S}(\mathbf{i}) + \beta\right) - 1\right) + \alpha^2\left(\sum_{\mathbf{j} \in [N]} \tilde{e}\left(\mathbf{u}_{[\log_2(D/N):]}, \mathbf{j}\right)\tilde{B}(\mathbf{j})\left(\tilde{T}(\mathbf{j}) + \beta\right) - 1\right), \quad (13)$$
or equivalently,
$$\alpha + \alpha^2 = \sum_{\mathbf{i} \in [D/N]} \sum_{\mathbf{j} \in [N]} \left( \tilde{A}(\mathbf{i} \oplus \mathbf{j})\left(\alpha\,\tilde{e}(\mathbf{u}, \mathbf{i} \oplus \mathbf{j})\left(\tilde{S}(\mathbf{i} \oplus \mathbf{j}) + \beta\right) + 1\right) + N D^{-1}\,\tilde{B}(\mathbf{j})\left(\alpha^2\,\tilde{e}\left(\mathbf{u}_{[\log_2(D/N):]}, \mathbf{j}\right)\left(\tilde{T}(\mathbf{j}) + \beta\right) - \tilde{m}(\mathbf{j})\right)\right). \quad (14)$$
A comprehensive description of the procedure to validate $\mathbf{S} \subset \mathbf{T}$ is found in Protocol 1. In particular, in Line 1, tlookup-Setup(T) generates a short witness ⟦T⟧ to a prescribed table T known to both parties; in Line 4, the prover constructs m based on a tensor S and table T and commits to S and m using tlookup-Prep(S, T); finally, in Line 9, ⟨P, V⟩.tlookup-Prove(⟦S⟧, ⟦m⟧, ⟦T⟧) is the interactive process of the prover P proving that a secret tensor S is elementwise contained in T, which has been committed as ⟦T⟧.

Protocol 1 tlookup
Require: The prover P knows S ∈ F^D. N, D are both powers of 2 such that N divides D.
1: procedure tlookup-Setup(T ∈ F^N)
2:   return ⟦T⟧ ← Commit(T; 0)   ▷ No hiding required
3: end procedure
4: procedure P.tlookup-Prep(S ∈ F^D, T ∈ F^N)
5:   Compute m = m(S, T) as in (10)
6:   P → V: ⟦S⟧ ← Commit(S)
7:   P → V: ⟦m⟧ ← Commit(m)
8: end procedure
9: procedure ⟨P, V⟩.tlookup-Prove(⟦S⟧, ⟦m⟧, ⟦T⟧)
10:   V → P: β ∼ F
11:   P computes A, B as in (11)
12:   P → V: ⟦A⟧ ← Commit(A), ⟦B⟧ ← Commit(B)
13:   P and V run the sumcheck on (14), followed by the proofs of evaluation on ⟦A⟧, ⟦B⟧, ⟦S⟧, ⟦m⟧, and ⟦T⟧.
14: end procedure

Meanwhile, for elementwise non-arithmetic operations f: X → Y over tensors, where X, Y ⊂ F, two lookup tables can be constructed: T_X := (x)_{x ∈ X} and T_Y := (f(x))_{x ∈ X}. To demonstrate that Y = f(X) for some X, Y ∈ F^D (broadcasting f over all dimensions), one can apply the idea of random linear combination to reduce the check to one instance of Protocol 1, namely X + αY ⊂ T_X + αT_Y for α ∼ F chosen by the verifier.

Example 4.2 (ReLU with rescaling). We first consider the rectified linear unit (ReLU), which is a common activation function in contemporary deep learning models, including Transformers.
ReLUs are generally applied subsequent to linear layers (for instance, fully connected layers) where products are involved. In the scenario of fully quantized computation, it becomes necessary for the ReLU to incorporate rescaling as well. This is denoted as follows: A \u2190ReLU(Z) = \u0016 Z \ud835\udefe \u0019 \u22991 \u001a\u0016 Z \ud835\udefe \u0019 \u22650 \u001b , (15) where\ud835\udeferepresents the scaling factor used in the system (assumed to be even for simplicity). We assume that \u2212\ud835\udc35 2 \u2264 j Z \ud835\udefe m < \ud835\udc35 2 holds elementwise for a positive even integer \ud835\udc35. Considering that Z is decomposed as Z\u2032 := j Z \ud835\udefe m and R = Z\u2212\ud835\udefeZ\u2032, we establish a pair of input-output lookup tables for Z\u2032. These are defined as TX := \u0002 \u2212\ud835\udc35 2 , \ud835\udc35 2 \u22121 \u0003 and TY := T+ X (i.e., taking the maximum with 0 element-wise), and an additional lookup table for R as TR = \u0002 \u2212\ud835\udefe 2, \ud835\udefe 2 \u22121 \u0003. By requiring the prover to demonstrate to the verifier that Z\u2032 + \ud835\udefcA \u2282TX + \ud835\udefcTY for a random \ud835\udefc, and that R \u2282TR, both using Protocol 1, in addition to proving the decomposition as Z = \ud835\udefeZ\u2032 + R, we can sufficiently validate the correctness of inference through the ReLU function. Notably, unlike the brute-force method that employs a single lookup table and incurs an \ud835\udc42(\ud835\udc35\ud835\udefe) overhead in both running time and memory usage, the use of two lookup tables effectively reduces this overhead to \ud835\udc42(\ud835\udc35+ \ud835\udefe). Similarly, if\ud835\udefeis too large to fit the table into memory, it can be further divided into a \ud835\udc3e-digit \ud835\udefe 1 \ud835\udc3e-ary number. In this scenario, each of the \ud835\udc3edigits in the remainder corresponds to a separate tlookup, thus adequately covering all possible values of the remainder. However, resolving the long-standing problem of excessive memory consumption for lookup tables in the realm of deep learning requires additional efforts. Specifically, in Section 5, tlookup is further refined into zkAttn to address its multivariate and highly non-arithmetic nature, optimizing the balance among running time, memory consumption, and approximation error. 5 zkAttn: DEDICATED ZKP FOR THE ATTENTION MECHANISM IN LLMS The attention mechanism is a key component in modern transformers, including state-of-the-art LLMs. However, incorporating these mechanisms into ZKP backends has been challenging, primarily due to their distinctive mathematical properties. Specifically, the Softmax function, integral to the attention mechanism, involves non-arithmetic operations like exponentiation, and its multivariate aspect complicates the use of polynomial approximations for traditional ZKP backends. To address these challenges, we introduce zkAttn, a specialized ZKP tailored for the attention mechanism, designed to leverage its inherent mathematical characteristics effectively. 5.1 Formulation of zkAttn The attention mechanism, in its discretized form, accepts as input a value matrix V \u2208F\ud835\udc5b\u00d7\ud835\udc51, a key matrix K \u2208F\ud835\udc5b\u00d7\ud835\udc51, and a query matrix Q \u2208F\ud835\udc5a\u00d7\ud835\udc51. It produces the output Softmax \u0010 QK\u22a4 \u221a \ud835\udc51 \u0011 V subject to appropriate rescaling of the input and output due to quantization, where Softmax is applied row-wise. 
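Before walking through the construction, it may help to state the target quantity in code. The snippet below is a plain, non-verifiable reference (our own illustration) of the quantized, row-wise Softmax that zkAttn must ultimately certify: inputs are integer scores scaled by γ, and the output is the Softmax scaled by θ and rounded. The concrete values of γ, θ, and d are placeholders.

```python
# A plain reference (no proof machinery) of the quantized, row-wise Softmax that
# zkAttn is asked to certify: inputs Z are integers scaled by gamma, and the
# output is the Softmax scaled by theta and rounded.
import math

def quantized_softmax_row(z_row, gamma, theta, d):
    """z_row: integer scores (actual values times gamma). Returns rounded theta*softmax."""
    real = [zi / (gamma * math.sqrt(d)) for zi in z_row]
    shift = max(real)                      # Softmax is shift-invariant; shift for numerical stability
    exps = [math.exp(x - shift) for x in real]
    total = sum(exps)
    return [round(theta * e / total) for e in exps]

if __name__ == "__main__":
    gamma, theta, d = 2**16, 2**20, 64
    z_row = [3 * gamma, -gamma, 0, 5 * gamma]           # integer-scaled scores
    y = quantized_softmax_row(z_row, gamma, theta, d)
    print(y, "row sum =", sum(y), "target =", theta)    # row sum lands close to theta
```

The construction below certifies exactly this kind of output without ever verifying the real-valued division directly.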
In this discussion, we focus on Attention(Q, K, V) = Softmax \u0010 Z \u221a \ud835\udc51 \u0011 , where the input matrix Z = QK\u22a4is presumed to be scaled by the scaling factor \ud835\udefefrom its actual values. It is assumed that \ud835\udc51is a constant, known to both the prover and verifier, stemming from the presumption of a known model architecture. Equivalently, for each row z = (\ud835\udc670,\ud835\udc671, . . . ,\ud835\udc67\ud835\udc5b\u22121) \u2208F\ud835\udc5b, the objective is to devise an algorithm that computes \ud835\udc60(z) := \u00a9 \u00ad \u00ad \u00ad \u00ad \u00ab exp \u0012 \ud835\udc67\ud835\udc56 \ud835\udefe \u221a \ud835\udc51 \u0013 \u00cd\ud835\udc5b\u22121 \ud835\udc57=0 exp \u0012 \ud835\udc67\ud835\udc57 \ud835\udefe \u221a \ud835\udc51 \u0013 \u00aa \u00ae \u00ae \u00ae \u00ae \u00ac \ud835\udc5b\u22121 \ud835\udc56=0 (16) in the real domain. Alternatively, it should compute its quantized counterpart \ud835\udf03\ud835\udc60(z), ensuring limited numerical error and manageable proof generation overhead. Here,\ud835\udf03represents the scaling factor of all Softmax outputs in the system. Notably, this factor differs from the scaling factor \ud835\udefeused for other matrices, such as Q, K, and V. For the sake of streamlined and verifiable rescaling in subsequent computations, we posit that \ud835\udf03is a multiple of \ud835\udefe. To circumvent the verification of real division operations\u2014which can lead to remainders after quantization\u2014we observe the following. By utilizing the shift-invariance property of Softmax and defining \u02c6 \ud835\udc67:= \ud835\udefe \u221a \ud835\udc51ln \u00a9 \u00ad \u00ab \ud835\udc5b\u22121 \u2211\ufe01 \ud835\udc57=0 exp \ud835\udc67\ud835\udc56 \ud835\udefe \u221a \ud835\udc51 ! \u00aa \u00ae \u00ac , (17) we derive that \ud835\udc60(z) = \ud835\udc60(z \u2212\u02c6 \ud835\udc67) = exp \ud835\udc67\ud835\udc56\u2212\u02c6 \ud835\udc67 \ud835\udefe \u221a \ud835\udc51 !!\ud835\udc5b\u22121 \ud835\udc56=0 . (18) It is imperative to understand that the computation of \u02c6 \ud835\udc67(\u230a\u02c6 \ud835\udc67\u2309to be specific, since \u02c6 \ud835\udc67is not an integer in general) in (17) is not directly verified due to its highly non-arithmetic nature. Instead, the prover ensures that the output of \ud835\udc60(z) adheres to proper normalization. In its quantized representation, the sum of its dimensions must equal \ud835\udf03. A certain degree of deviation is acceptable owing to quantization, and the precise bounds of this error will be elucidated in Section 7.1. Furthermore, beyond verifying normalization, there exists the challenge of crafting a scheme to compute the quantized exponentiation. This scheme should not only be accurate, approximating \ud835\udf03exp \u0012 \u00b7 \ud835\udefe \u221a \ud835\udc51 \u0013 with minimal error, but also be amenable to efficient verification through the proof protocol that will be subsequently introduced. Observe that, given the definition of \u02c6 \ud835\udc67, \ud835\udc67\ud835\udc56\u2212\u02c6 \ud835\udc67\u22640 for all \ud835\udc56s. 
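The role of the shift defined in (17)-(18), and the normalization check that replaces an explicit division, can be illustrated with a small numeric sketch. The scaling factors and dimensions below are toy values of our choosing, and the tolerated deviation is deliberately generous; the protocol derives its bound on the row-wise sum in Section 7.1.

```python
import math
import random

gamma = 2**8                    # input scaling factor (toy value)
theta = 2**16                   # output scaling factor of the Softmax (toy value)
d, n = 64, 16                   # head dimension and row length
c = gamma * math.sqrt(d)        # the denominator gamma * sqrt(d) in (16)

z = [random.randrange(-50 * gamma, 50 * gamma) for _ in range(n)]   # one row of Z = Q K^T

# Eq. (17): the shift that absorbs the normalizer.
z_hat = c * math.log(sum(math.exp(zi / c) for zi in z))

# Eq. (18): softmax of the shifted row equals softmax of the original row.
total = sum(math.exp(zi / c) for zi in z)
shifted = [math.exp((zi - z_hat) / c) for zi in z]
reference = [math.exp(zi / c) / total for zi in z]
assert all(abs(a - b) < 1e-9 for a, b in zip(shifted, reference))

# What the protocol checks instead of the division: after subtracting round(z_hat),
# the quantized outputs sum to (approximately) theta.
y = [round(theta * math.exp((zi - round(z_hat)) / c)) for zi in z]
assert abs(sum(y) - theta) <= theta // 256     # generous slack for this illustration
```

As stated above, how round(z_hat) was obtained is never proved directly; only the normalization of the outputs and the per-entry exponentiations are verified.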
On the other hand, as \ud835\udc67\ud835\udc56s are all real numbers involved in the matrix multiplication scaled by \ud835\udefe, it is also reasonable to assume that each \ud835\udc67\ud835\udc56\u2212\u02c6 \ud835\udc67is lower bounded by some integer \u2212\ud835\udc35such that \ud835\udefe\u226a\ud835\udc35\u226a|F|, such that (\u2212\ud835\udc35, 0] can accommodate the reasonable values of \ud835\udc67\ud835\udc56\u2212\u02c6 \ud835\udc67, but sufficiently small so as not to cause wraparounds in F. Without loss of generality, consider \ud835\udc35as a product of \ud835\udc3epositive integers, denoted as \ud835\udc35= \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udc4f(\ud835\udc58). A bijection can then be established between [\ud835\udc35] and the product space \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 h \ud835\udc4f(\ud835\udc58)i . By 6 zkLLM: Zero Knowledge Proofs for Large Language Models CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. defining \ud835\udc35(\ud835\udc58) as \ud835\udc35(\ud835\udc58) := ( 1, if \ud835\udc58= 0; \u00ce\ud835\udc58\u22121 \ud835\udc57=0 \ud835\udc4f(\ud835\udc57), 1 \u2264\ud835\udc58\u2264\ud835\udc3e\u22121, our bijection \ud835\udd1f: \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 h \ud835\udc4f(\ud835\udc58)i \u2192[\ud835\udc35] can be expressed as \ud835\udd1f \u0010 \ud835\udc65(0),\ud835\udc65(1), . . . ,\ud835\udc65(\ud835\udc3e\u22121)\u0011 = \ud835\udc3e\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc65(\ud835\udc58)\ud835\udc35(\ud835\udc58). (19) Consequently, for each \u0010 \ud835\udc65(0),\ud835\udc65(1), . . . ,\ud835\udc65(\ud835\udc3e\u22121)\u0011 = \ud835\udd1f\u22121(\ud835\udc65), the following holds: exp \u2212\ud835\udc65 \ud835\udefe \u221a \ud835\udc51 ! = exp \u2212 \u00cd\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udc65(\ud835\udc58)\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 ! = \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 exp \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 \ud835\udc65(\ud835\udc58) ! . (20) Our objective is to compute the quantized representation of equation (20), taking into account the scaling factor \ud835\udefe. If we further decompose \ud835\udefeas \ud835\udf03= \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udf03(\ud835\udc58) with non-negative values for \ud835\udf03(\ud835\udc58), equation (20) gives rise to \ud835\udf03exp \u2212\ud835\udc65 \ud835\udefe \u221a \ud835\udc51 ! = \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 \ud835\udf03(\ud835\udc58) exp \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 \ud835\udc65(\ud835\udc58) ! . (21) Following the decomposition in Equation (21), we can construct \ud835\udc3etlookup tables T(\ud835\udc58) = \u0010 T(\ud835\udc58) X , T(\ud835\udc58) Y \u0011 . Each table T(\ud835\udc58) comprises all potential input-output pairs corresponding to the \ud835\udc58-th term in the product of (21): T(\ud835\udc58) X := h \ud835\udc4f(\ud835\udc58)i , T(\ud835\udc58) Y := $ \ud835\udf03(\ud835\udc58) exp \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 \ud835\udc65 !'! \ud835\udc65\u2208[\ud835\udc4f(\ud835\udc58)] . (22) Given any input \ud835\udc67\u2208(\u2212\ud835\udc35, 0], the prover first decomposes \ud835\udd1f(\u2212\ud835\udc67) = \u0010 \ud835\udc65(0),\ud835\udc65(1), . . . ,\ud835\udc65(\ud835\udc3e\u22121)\u0011 according to (19). 
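The decomposition (19) and the per-digit tables (22) can be previewed with a toy configuration of two base-256 digits and an even split of theta; these choices are ours and differ from the segmentation used in the experiments. The product reconstruction and the rounding bound below anticipate the three conditions the prover proves next.

```python
import math
import random

gamma, d = 256, 64
c = gamma * math.sqrt(d)            # gamma * sqrt(d), as in (16)
b = [256, 256]                      # digit bases b^(0), b^(1)
Bk = [1, 256]                       # place values B^(0) = 1, B^(1) = b^(0)
B = b[0] * b[1]                     # total covered input range
theta_k = [256, 256]                # per-digit output scales; their product is theta
theta = theta_k[0] * theta_k[1]

# Tables (22): T_Y^(k)[x] = round(theta^(k) * exp(-B^(k) * x / c)) for x in [b^(k)].
T_Y = [[round(tk * math.exp(-Bk[k] * x / c)) for x in range(b[k])]
       for k, tk in enumerate(theta_k)]

def digits(x: int):
    # the bijection (19): x = x^(0) * B^(0) + x^(1) * B^(1) with x^(k) in [b^(k)]
    return [x % b[0], x // b[0]]

for _ in range(1000):
    x = random.randrange(B)                       # x stands for -(z_i - round(z_hat))
    xs = digits(x)
    assert sum(xk * Bk[k] for k, xk in enumerate(xs)) == x        # valid decomposition
    y = 1
    for k, xk in enumerate(xs):
        y *= T_Y[k][xk]                           # table lookups and their product
    exact = theta * math.exp(-x / c)
    # each factor is off by at most 1/2, so the product deviates from the exact
    # scaled exponential by at most theta^(0)/2 + theta^(1)/2 + 1/4
    assert abs(y - exact) <= theta_k[0] / 2 + theta_k[1] / 2 + 0.25
```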
Each component \ud835\udc65(\ud835\udc58) is subsequently mapped to\ud835\udc66(\ud835\udc58) based on T(\ud835\udc58), resulting in the computation\ud835\udc66\u2190\u00ce\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udc66(\ud835\udc58). Subsequently, the prover must demonstrate to the verifier that: \ud835\udc67+ \ud835\udc3e\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc65(\ud835\udc58)\ud835\udc35(\ud835\udc58) = 0, (23) \u0010 \ud835\udc65(\ud835\udc58),\ud835\udc66(\ud835\udc58)\u0011 \u2208T(\ud835\udc58), \u22000 \u2264\ud835\udc58\u2264\ud835\udc3e\u22121, (24) \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 \ud835\udc66(\ud835\udc58) = \ud835\udc66. (25) Equation (23) confirms that the decomposition of \u2212\ud835\udc67is valid. Equation (24) ensures the correctness of the exponent in each component concerning the pre-computed values in T(\ud835\udc58), and (25) asserts that the output \ud835\udc66is accurately derived using the homomorphism of exponentiation from each factor \ud835\udc66(\ud835\udc58). Together, these three conditions guarantee the correct computation of the exponentiation operation, up to the rounding errors. Specifically: Lemma 5.1. Conditions (23), (24), and (25) imply: \ud835\udc66= \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 $ \ud835\udf03(\ud835\udc58) exp \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 \ud835\udc65(\ud835\udc58) !' , (26) where \u0010 \ud835\udc65(\ud835\udc58)\u0011\ud835\udc3e\u22121 \ud835\udc58=0 = \ud835\udd1f(\u2212\ud835\udc67) is the valid decomposition according to (19). The deviation between \ud835\udc66and the exact scaled exponent \ud835\udf03exp \ud835\udc67 \ud835\udefe \u221a \ud835\udc51 ! = \ud835\udc3e\u22121 \u00d6 \ud835\udc58=0 \ud835\udf03(\ud835\udc58) exp \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 \ud835\udc65(\ud835\udc58) ! arises only from the rounding of each factor. An in-depth analysis of this will be covered in Section 7.1, where we will also provide guidance on selecting the parameters \ud835\udf03(\ud835\udc58) and \ud835\udc35(\ud835\udc58). In the subsequent sections of this text, we delve into the protocol design, facilitating the batched verification of (23), (24), and (25) for each dimension of large tensors used in transformer computations. 5.1.1 Optimization for the most and least significant segments. For the uppermost significant \ud835\udc40segments, specifically \ud835\udc65(\ud835\udc58) with \ud835\udc3e\u2212 \ud835\udc40\u2264\ud835\udc58\u2264\ud835\udc3e\u22121, consider a scenario where if any of these segments \ud835\udc65(\ud835\udc58) have non-zero values, the resulting exponent exp \u2212\ud835\udc65 \ud835\udefe \u221a \ud835\udc51 ! \u2264exp \u2212\ud835\udc35\ud835\udc3e\u2212\ud835\udc40 \ud835\udefe \u221a \ud835\udc51 ! (27) approximates 0 closely enough that the output \ud835\udc66from (25) can be designated as 0. This outcome can be achieved by configuring each table T(\ud835\udc58) that \ud835\udc3e\u2212\ud835\udc40\u2264\ud835\udc58\u2264\ud835\udc3e\u22121 in (24), to yield \ud835\udc66(\ud835\udc58) = 0 for any \ud835\udc65(\ud835\udc58) > 0. Moreover, based on our initial design, instances where \ud835\udc65(\ud835\udc58) = 0, the value of \ud835\udc66(\ud835\udc58) defaults to j \ud835\udf03(\ud835\udc58)m . 
Clearly, assigning any value other than 1 to these \ud835\udf03(\ud835\udc58) would only amplify the errors in T(\ud835\udc58) and other tables, especially under the constraint that \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udf03(\ud835\udc58) = \ud835\udf03is constant. Therefore, for these most significant \ud835\udc40 segments, the lookup tables T(\ud835\udc58) can be reduced to the indicator function \ud835\udc66(\ud835\udc58) = 1{\ud835\udc65(\ud835\udc58) = 0}. On the other hand, for the least significant \ud835\udc3fsegments \ud835\udc65(\ud835\udc58), indexed by 0 \u2264\ud835\udc58\u2264\ud835\udc3f\u22121, the expression exp \u0012 \u2212\ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51\ud835\udc65(\ud835\udc58) \u0013 tends to hover close to 1 for all possible values of 0 \u2264\ud835\udc65(\ud835\udc58) \u2264\ud835\udc4f(\ud835\udc58) \u22121. Given this, approximating the exponentiation as a constant of 1 incurs a negligible error. Analogous to the strategy for the most significant segments, it is efficient to set the scaling factors \ud835\udf03(\ud835\udc58) to 1, sidestepping larger alternatives. This approach frees up room for allocating larger \ud835\udf03(\ud835\udc58)s for segments indexed by \ud835\udc3f\u2264\ud835\udc58\u2264\ud835\udc3e\u2212\ud835\udc40\u22121, thereby enhancing precision for these segments. As a result, the constraint (24) for validating input-output pairs simplifies to \ud835\udc65(\ud835\udc58) \u2208 h \ud835\udc4f(\ud835\udc58)i , given that \ud835\udc66(\ud835\udc58) consistently equates to 1. As delineated in Section 7.1, employing these optimizations for both the most and least significant segments tightly upper bounds the error in zkAttn, aligning closely with the computational logic of the original neural networks. 5.2 zkAttn: the main protocol In Protocol 2, we present the technical details of zkAttn built upon tlookup, our protocol for general non-arithmetic operations in deep learning. zkAttn is separated into three steps. First, zkAttnSetup(\u00b7) (in Line 1) completes the setup for zkAttn by generating short witnesses of all tables involved. Then, in zkAttn-Compute(\u00b7), the prover is responsible for computing the output of the attention 7 CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. Haochen Sun, Jason Li, and Hongyang Zhang mechanism and all necessary auxiliary tensors (e.g., the normalization constants, the inputs and outputs of each segment, and the recovered row-wise sum that should not excessively deviate from 1) for the zero-knowledge verifiability of the attention mechanism, and sends the commitments of these tensors to the verifier. Finally, the prover and verifier engage in the interactive protocol zkAttnProve(\u00b7) in Line 14, which involves proving the correctness of each segment and the normalization using the specialized lookup arguments of tlookup, as well as all arithmetic relations connecting the auxiliary tensors. 6 PUTTING EVERYTHING TOGETHER 6.1 Taxonomy of verifiable tensor operations In this section, we present a taxonomy of the tensor operations involved in modern LLMs and the customized handling of these operations by zkLLM. 6.1.1 Matrix multiplications. Matrix multiplication plays a crucial role in modern transformers, including all linear layers and positional encoding (e.g., RoPE [48]) in LLMs. Dedicated sumchecks [26] for matrix multiplications have achieved running times significantly lower than the computation itself. 
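The reduction for matrix products rests on an identity between multilinear extensions, written out as an equation in the next paragraph; the sketch below checks that identity numerically at a random point. It is only an illustration: the sumcheck protocol that spares the verifier from recomputing the sum is not implemented, and the field, dimensions, and helper names are toy choices of ours.

```python
import random

P = 2**61 - 1                        # toy prime modulus
m, n, p = 4, 4, 4                    # A is m x n, B is n x p (powers of two)
log_m = log_n = log_p = 2

def bits(i, width):
    return [(i >> j) & 1 for j in range(width)]

def chi(point, idx_bits):
    # eq polynomial: prod_j (x_j * b_j + (1 - x_j) * (1 - b_j))
    out = 1
    for x, bbit in zip(point, idx_bits):
        out = out * ((x * bbit + (1 - x) * (1 - bbit)) % P) % P
    return out

def mle(M, rows, cols, u, v):
    # evaluate the multilinear extension of the matrix M at the point (u, v)
    total = 0
    for r in range(rows):
        for col in range(cols):
            term = chi(u, bits(r, len(u))) * chi(v, bits(col, len(v))) % P
            total = (total + term * M[r][col]) % P
    return total

A = [[random.randrange(P) for _ in range(n)] for _ in range(m)]
B = [[random.randrange(P) for _ in range(p)] for _ in range(n)]
C = [[sum(A[r][i] * B[i][col] for i in range(n)) % P for col in range(p)] for r in range(m)]

u = [random.randrange(P) for _ in range(log_m)]     # verifier's random challenges
v = [random.randrange(P) for _ in range(log_p)]

lhs = mle(C, m, p, u, v)
rhs = sum(mle(A, m, n, u, bits(i, log_n)) * mle(B, n, p, bits(i, log_n), v)
          for i in range(n)) % P
assert lhs == rhs      # the multilinear extension of C = AB satisfies the summed identity
```

With the sumcheck applied to this sum, the prover's work stays at O(mn + np), as noted in the text.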
This development has been a key driver in the creation of specialized ZKPs for deep learning, a method also applied in this study to establish the correctness of all matrix products. To confirm C = AB, with A \u2208F\ud835\udc5a\u00d7\ud835\udc5band B \u2208F\ud835\udc5b\u00d7\ud835\udc5d, the prover and the verifier execute a sumcheck on e C (u, v) = \u2211\ufe01 i\u2208{0,1}\u2308log2 \ud835\udc5b\u2309 e A (u, i) e B (i, v) , (28) where u \u2208F\u2308log2 \ud835\udc5a\u2309and v \u2208F\u2308log2 \ud835\udc5d\u2309are selected at random by the verifier. This specialized proof for matrix multiplication ensures a prover time of \ud835\udc42(\ud835\udc5a\ud835\udc5b+ \ud835\udc5b\ud835\udc5d), faster than the computation process. 6.1.2 Activation functions. To enhance performance, modern LLMs have replaced traditional ReLU activation functions with smoother alternatives like SwiGLU [47] and GELU [31]. This transition necessitates extra efforts to make these new activation functions verifiable. The SwiGLU function, parameterized by \ud835\udefd, is defined as SwiGLU\ud835\udefd(\ud835\udc67,\ud835\udc67\u2032) := Swish\ud835\udefd(\ud835\udc67) \u00b7 \ud835\udc67\u2032, (29) with the Swish function being Swish\ud835\udefd(\ud835\udc67) := \ud835\udc67\u00b7Sigmoid(\ud835\udefd\ud835\udc67). Though the non-arithmetic Sigmoid function can be integrated into the proof system via tLookup, optimizing setup costs and memory usage is crucial. This is achieved by reducing the Sigmoid function to zkAttn, given Softmax \u0012\ud835\udc67 0 \u0013 = \u0012 Sigmoid(\ud835\udc67) Sigmoid(\u2212\ud835\udc67) \u0013 , thus circumventing the bottleneck of iterating over extensive input-output pairs. The GELU function, defined as GELU(\ud835\udc67) := \ud835\udc67\u03a6(\ud835\udc67) \u2248\ud835\udc67Sigmoid(1.702\ud835\udc67), is handled similarly. 6.1.3 Normalization. LLMs employ LayerNorm [2] and its variants (e.g., RMSNorm [64]) for training stability. Unlike batch normalization, which can be merged into preceding layers for verifiable inference, LayerNorm involves non-linear transformations within each sample x, described as \ud835\udc66\u2190 x \u2212E[x] \u221a\ufe01 Var[\ud835\udc65] + \ud835\udf16 . (30) The compound non-arithmetic operations of square-root and inverse are managed through two sequential tlookup steps. These steps are responsible for the verifiable downscaling of the input and the quantized compound operation, respectively. Similar to the ReLU example presented in Example 4.2, the implementation of the first tlookup for downscaling is designed to reduce the overall sizes of the lookup tables, thereby keeping memory usage at a reasonable level. 6.2 Assembly of the proofs Following the pioneering works [26, 38], zkLLM utilizes sumcheckbased proofs for computations across various components of LLMs. These proofs are assembled in reverse logical order of the arithmetic circuits, methodically reducing the claimed multilinear extension values of the output \ud835\udc66to those associated with the prompt X and the model parameters W. These are then verified straightforwardly and through proof of evaluations on the commitment \u27e6W\u27e7. For high-level clarity, zkLLM can be distilled into three main components, with additional commitments caused by tlookups omitted for simplicity: \u2022 \u27e6W\u27e7\u2190zkLLM-Commit(W, pp,\ud835\udc5f): The prover commits to the model parameters W using the public parameters (generators of the commitment scheme) pp and randomness \ud835\udc5f. 
\u2022 (\ud835\udc66, \ud835\udf0b) \u2190zkLLM-Prove(W, X, pp,\ud835\udc5f): The prover computes the output\ud835\udc66with the prompt X and model W, and assembles the proof \ud835\udf0busing the sumcheck protocols and proofs of evaluations as previously described. \u2022 \ud835\udc4f\u2190zkLLM-Verify(X,\ud835\udc66, \u27e6W\u27e7): The verifier checks the correctness of each sumcheck and proof of evaluation within \ud835\udf0b, outputting \ud835\udc4f= 1 to accept the proof (if all components are correctly verified), and \ud835\udc4f= 0 otherwise. 7 ANALYSIS 7.1 Error analysis on zkAttn In this section, we examine the error introduced by zkAttn, as discussed in Section 5. Our analysis serves three primary objectives: first, to establish an upper bound on the error, thereby demonstrating that our zkAttn design remains faithful to the original neural network; second, to fine-tune the parameters integral to zkAttn, aiming to minimize the error; and third, to determine an acceptable upper bound on the error upon the verification of proper normalization in zkAttn. The overall error of zkAttn originates from two sources, the rounding of the shifting factor \u02c6 \ud835\udc67in (17) that makes the normalization no longer perfect, and the rounding of each segment encoded in the lookup tables T(\ud835\udc58)s that introduces errors to the exponentiations. The bound of the overall error is stated in Theorem 7.1 and analyzed in details in Appendix C.1. 8 zkLLM: Zero Knowledge Proofs for Large Language Models CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. Protocol 2 zkAttn (See Section 6.1.1 about the two matrix multiplications involved) Require: Both the prover P and the verifier V know: the lower bound of input \u2212\ud835\udc35, and the factorization \ud835\udc35= \u00ce\ud835\udc3e\u22121 \ud835\udc58=0 \ud835\udc4f(\ud835\udc58); the number of the most and least significant segments \ud835\udc40and \ud835\udc3fin Section 5.1.1; the scaling factor of the input \ud835\udefe, the output \ud835\udf03, and each segment \ud835\udf03(\ud835\udc58) for each \ud835\udc3f\u2264\ud835\udc58\u2264\ud835\udc3e\u2212\ud835\udc40\u22121; the parameters \ud835\udc5a,\ud835\udc5b,\ud835\udc51related to the dimensions of the input; the tolerable error \ud835\udc38in row-wise normalization. 
1: procedure zkAttn-Setup(\ud835\udc35, \ud835\udc3e, \ud835\udc40, \ud835\udc3f, \u0010 \ud835\udc4f(\ud835\udc58)\u0011\ud835\udc3e\u22121 \ud835\udc58=0 ,\ud835\udefe,\ud835\udf03, (\ud835\udf03\ud835\udc3e)\ud835\udc3e\u2212\ud835\udc40\u22121 \ud835\udc58=\ud835\udc3f ,\ud835\udc5a,\ud835\udc5b,\ud835\udc51, \ud835\udc38) 2: for 0 \u2264\ud835\udc58\u2264\ud835\udc3e\u22121 do 3: \u0014 T(\ud835\udc58) X \u0015 \u2190tlookup-Setup(T(\ud835\udc58) X ) \u22b2T(\ud835\udc58) X = h \ud835\udc4f(\ud835\udc58)i , i.e., the input of the \ud835\udc58-th segment 4: end for 5: for \ud835\udc3f\u2264\ud835\udc58\u2264\ud835\udc3e\u2212\ud835\udc40\u22121 do 6: \u0014 T(\ud835\udc58) Y \u0015 \u2190tlookup-Setup(T(\ud835\udc58) Y ) \u22b2T(\ud835\udc58) Y as defined in (22), i.e., the output of the \ud835\udc58-th segment 7: end for 8: for \ud835\udc3e\u2212\ud835\udc40\u2264\ud835\udc58\u2264\ud835\udc3e\u22121 do 9: \u0014 T(\ud835\udc58) Y \u0015 \u2190tlookup-Setup(T(\ud835\udc58) Y ) \u22b2T(\ud835\udc58) Y = 1 nh \ud835\udc4f(\ud835\udc58)i = 0 o , i.e., the optimized output of the \ud835\udc58-th segment 10: end for 11: \u27e6TR\u27e7\u2190tlookup-Setup(TR) \u22b2TR = [\ud835\udf03\u2212\ud835\udc38,\ud835\udf03+ \ud835\udc38], i.e., all tolerable values of row-wise sum of the output 12: return \u0010\u0014 T(\ud835\udc58) X \u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=0 , \u0010\u0014 T(\ud835\udc58) Y \u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3f, \u27e6TR\u27e7 13: end procedure 14: procedure P.zkAttn-Compute(Z \u2208F\ud835\udc5a\u00d7\ud835\udc5b, \u0010 T(\ud835\udc58) X \u0011\ud835\udc3e\u22121 \ud835\udc58=0 , \u0010 T(\ud835\udc58) Y \u0011\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3f) \u22b2Some implicit parameters included in Setup(\u00b7) omitted 15: Z\u2032 \u2190Z \u2212\u230a\u02c6 z\u23091\u22a4, where \u02c6 z \u2208R\ud835\udc5ais computed row-wise as (17) 16: \u0010 X(\ud835\udc58)\u0011\ud835\udc3e\u22121 \ud835\udc58=0 \u2190\ud835\udd1f\u22121 (\u2212Z\u2032) \u22b2\u2212Z is decomposed elementwisely 17: for \ud835\udc58\u2190\ud835\udc3f, \ud835\udc3f+ 1, . . . , \ud835\udc3e\u22121 do 18: Y(\ud835\udc58) \u2190\ud835\udc53(\ud835\udc58) (X(\ud835\udc58)) elementwisely, where \ud835\udc53(\ud835\udc58) is defined by T(\ud835\udc58) X , T(\ud835\udc58) Y 19: end for 20: Y \u2190\u00c7\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3fY(\ud835\udc58) \u22b2Compute the final output 21: \u02c6 y \u2190Y.sum(axis = 1) \u22b2For checking the normalization of each row 22: P \u2192V : \u27e6Z\u27e7, \u27e6\u230a\u02c6 z\u2309\u27e7, \u0010\u0014 X(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=0 , \u27e6Y\u27e7, \u0010\u0014 Y(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3f, \u27e6\u02c6 y\u27e7 23: end procedure 24: procedure \u27e8P, V\u27e9.zkAttn-Prove(\u27e6Z\u27e7, \u27e6\u230a\u02c6 z\u2309\u27e7, \u0010\u0014 X(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=0 , \u27e6Y\u27e7, \u0010\u0014 Y(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3f, \u27e6\u02c6 y\u27e7) 25: for \ud835\udc58\u21900, 1, . . . 
, \ud835\udc3e\u22121 do 26: P.tlookup-Prep(X(\ud835\udc58), T(\ud835\udc58) X ) \u22b2 \u0014 m(\ud835\udc58) := m \u0010 X(\ud835\udc58), T(\ud835\udc58) X \u0011\u0015 transmitted to V 27: \u27e8P, V\u27e9.tlookup-Prove( \u0014 X(\ud835\udc58)\u0015 , \u0014 m(\ud835\udc58)\u0015 , \u0014 T(\ud835\udc58) X \u0015 ) \u22b2Prove the correctness on the \ud835\udc58-th segment 28: end for 29: V \u2192P : \ud835\udefc\u223cF 30: for \ud835\udc58\u2190\ud835\udc3f, \ud835\udc3f+ 1, . . . \ud835\udc3e\u22121 do 31: P.tlookup-Prep(X(\ud835\udc58) + \ud835\udefcY(\ud835\udc58), T(\ud835\udc58) X + \ud835\udefcT(\ud835\udc58) Y ) \u22b2 \u0014 m(\ud835\udc58) := m \u0010 X(\ud835\udc58) + \ud835\udefcY(\ud835\udc58), T(\ud835\udc58) X + \ud835\udefcT(\ud835\udc58) Y \u0011\u0015 transmitted to V 32: \u27e8P, V\u27e9.tlookup-Prove( \u0014 X(\ud835\udc58)\u0015 + \ud835\udefc \u0014 Y(\ud835\udc58)\u0015 , \u0014 m(\ud835\udc58)\u0015 , \u0014 T(\ud835\udc58) X \u0015 + \ud835\udefc \u0014 T(\ud835\udc58) Y \u0015 ) \u22b2Prove the correctness on the \ud835\udc58-th segment 33: end for 34: P.tlookup-Prep(\u02c6 y, TR) \u22b2\u27e6mR := m (Y, TR)\u27e7transmitted to V 35: \u27e8P, V\u27e9.tlookup-Prove(\u27e6\u02c6 y\u27e7, \u27e6mR\u27e7, \u27e6TR\u27e7) \u22b2Prove the correctness on the \ud835\udc58-th segment 36: P and V run the sumcheck for \ud835\udd1f(X0, X1, . . . , X\ud835\udc3e\u22121) + Z\u2032 = 0 \u00d3 Z\u2032 = Z \u2212\u230a\u02c6 z\u23091\u22a4\u00d3 Y = \u00c7\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3fY(\ud835\udc58) \u00d3 \u02c6 y = Y.sum(axis = 1), followed by the proof of evaluations on \u27e6Z\u27e7, \u27e6\u230a\u02c6 z\u2309\u27e7, \u0010\u0014 X(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=0 , \u27e6Y\u27e7, \u0010\u0014 Y(\ud835\udc58)\u0015\u0011\ud835\udc3e\u22121 \ud835\udc58=\ud835\udc3f, \u27e6\u02c6 y\u27e7 37: end procedure Theorem 7.1 (Error bound). With the choice of \ud835\udc35\ud835\udc3e\u2212\ud835\udc40\u2190 \ud835\udefe \u221a \ud835\udc51 \ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f+ 1 ((\ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f) ln (2\ud835\udc5b) + ln\ud835\udf03) , (31) \ud835\udf03(\ud835\udc58) \u2190exp \ud835\udc35(\ud835\udc58) \ud835\udefe \u221a \ud835\udc51 (\ud835\udc4f(\ud835\udc58) \u22121) ! \ud835\udf03exp \u2212\ud835\udc35\ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc35\ud835\udc3f \ud835\udefe \u221a \ud835\udc51 !! 1 \ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f (32) and irrelevant of the choice of other \ud835\udc35\ud835\udc58s, the error bound in (48) can be minimized as \ud835\udf00attn = \ud835\udc42 \u0012 (\ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f) \u0010\ud835\udc5b \ud835\udf03 \u0011 1 \ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f+1 \u0013 . (33) 9 CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. Haochen Sun, Jason Li, and Hongyang Zhang Theorem 7.1 have multiple implications: (1) Minimizing \ud835\udc3e\u2212\ud835\udc40\u2212\ud835\udc3f, (i.e., the number of segments that are designated as neither the most significant nor the least significant as in Section 5.1.1) and \ud835\udc35\ud835\udc3f(i.e., the magnitude of least significant segments) is in favour of reducing the error. However, as will be discussed in Section 7.3, this incurs an undue increase in the computational overhead due to the sizes of the lookup tables. 
(2) \ud835\udf00attn has no dependence on the segmentation of the zkAttn input except \ud835\udc35\ud835\udc3fand \ud835\udc35\ud835\udc3e\u2212\ud835\udc40, leaving room for distributing the sizes of lookup tables T(\ud835\udc58) evenly which compresses the computational overhead associated. (3) \ud835\udf00attn defines the tolerable error row-wise sum upon checking the normalization, i.e., the sum of each row must lie within [(1 \u2212\ud835\udf00attn)\ud835\udf03, (1 + \ud835\udf00attn)\ud835\udf03]. 7.2 Security and privacy analysis In this section, we formalize the security and privacy aspects of zkLLM, focusing on the tlookup protocol which facilitates zeroknowledge verifiable computations across all non-arithmetic components. We first address the completeness error of tlookup: Theorem 7.2 (Completeness of Protocol 1). Assuming the verifier V is semi-honest, Protocol 1 incurs a completeness error of \ud835\udc42 \u0010 \ud835\udc41 |F| \u0011 . Theorem 7.2 indicates that if the prover P follows the protocol, the proof produced using Protocol 1 has a mere \ud835\udc42 \u0010 \ud835\udc41 |F| \u0011 chance of being rejected. As detailed in Appendix C.2, this minor imperfection stems from the probability that the random challenge \ud835\udefd, chosen by the verifier in Line 10, might trigger a division-by-zero error in one of the terms on either side of Equation (9). On the other hand, all arithmetic tensor operations do not contribute any additional completeness error, thanks to the direct application of the sumcheck protocol. Therefore, the overall probability of an honest prover failing to convince a semi-honest verifier of the correctness of its inference through the LLM is at most \ud835\udc42 \u0010 \ud835\udc36 |F| \u0011 , where \ud835\udc36represents the total size of all tensors involved in non-arithmetic operations. Given that the entire complexity of the inference is poly(\ud835\udf06) and |F| = \u03a9 \u0010 22\ud835\udf06\u0011 , this error remains negligible in \ud835\udf06. Practically, as our implementation employs the BLS12-381 curve with |F| \u22482254, far exceeding the computational limits of current technology, verification has never failed in any experiment conducted, as reported in Section 8. Similarly, the soundness error of tlookup is also negligible in \ud835\udf06, as stated in Theorem 7.3. Coupled with the direct application of the sumcheck protocol and the proof-of-opening for the committed tensors, which also incur negligible errors in \ud835\udf06due to the polynomial \ud835\udf06assumption on the complexity of the entire inference process, we theoretically establish that a valid proof can confirm the correctness of the inference result except with only a negligible probability. Theorem 7.3 (Soundness of Protocol 1). For any probabilistic polynomial-time (p.p.t.) prover P, if in Line 6, the message P sends to V is \u27e6S\u27e7\u2190Commit(S) such that S \u2284T, then except with probability negl (\ud835\udf06), the execution of Protocol 1 is unsuccessful, resulting in the semi-honest verifier V rejecting the proof. Proof sketch of Theorem 7.3. By the binding property of the commitment scheme, except with probability negl (\ud835\udf06), in Line 13, the success of proofs of evaluations implies the correctness of all claimed multilinear extension values on A, B, S, m, and T. 
Subsequently, the success of the sumchecks implies with 1 \u2212\ud835\udc42 \u0010 \ud835\udc37 |F| \u0011 that all equalities in Equations (11) and (12) hold, such that \u2211\ufe01 \ud835\udc56\u2208[\ud835\udc37] 1 \ud835\udefd+ S\ud835\udc56 = \u2211\ufe01 \ud835\udc56\u2208[\ud835\udc41] m\ud835\udc56 \ud835\udefd+ T\ud835\udc56 . (34) Finally, given the randomness of \ud835\udefd, with probability 1 \u2212\ud835\udc42 \u0010 \ud835\udc41\ud835\udc37 |F| \u0011 , Equation (34) implies Equation (9), leading to the conclusion that S \u2282T. \u25a1 It is noteworthy, as elaborated in Section 7.1, that the correctness of zkAttn is quantified by an L1 error of \ud835\udf00attn. This measure of correctness similarly applies to other exponentiation-based activation functions. The correctness of all other non-arithmetic operations must also consider the quantization errors. For example, as highlighted in Example 4.2, a numerical error margin of 1 2\ud835\udefeis an inescapable consequence of the rescaling process. On the other hand, this degree of tolerance is also sufficient, as numerical errors exceeding 1 2\ud835\udefewould be detectable as incorrect computations and consequently rejected through the application of tlookup. Finally, with the application of zero-knowledge variations of sumcheck protocols [11, 38, 60, 61] and Pedersen commitment schemes [43], the proof assembled by zkLLM does not disclose any information about the protected model parameters. Formally, Theorem 7.4 (Zero-knowledge, adapted from [38]). Assuming the application of zero-knowledge variations of sumcheck protocols [11, 38, 60, 61] and Pedersen commitment schemes [43], there exists a simulator S = (S1, S2) such that the following two views are computationally indistinguishable to any probabilistic polynomial-time (PPT) algorithm A, given the public parameters pp (the generators used in the commitment scheme within the context of zkLLM): RealA,W(pp): 1: \u27e6W\u27e7\u2190zkLLM-Commit(W, pp,\ud835\udc5f) 2: X \u2190A(\u27e6W\u27e7, pp) 3: (\ud835\udc66, \ud835\udf0b) \u2190zkLLM-Prove(W, X, pp,\ud835\udc5f) 4: \ud835\udc4f\u2190A(\u27e6W\u27e7, X,\ud835\udc66, \ud835\udf0b, pp) 5: return \ud835\udc4f IdealA,SA (pp): 1: com \u2190S1(1\ud835\udf06, pp,\ud835\udc5f) 2: X \u2190A(com, pp) 3: (\ud835\udc66, \ud835\udf0b) \u2190SA 2 (com, X, pp,\ud835\udc5f), given oracle access to \ud835\udc66= zkLLM-compute(W, X) 4: \ud835\udc4f\u2190A(com, X,\ud835\udc66, \ud835\udf0b, pp) 5: return \ud835\udc4f For any PPT algorithm A and all LLM (represented by the parameter) W, there exists a simulator S such that \f \f \fP \u0000RealA,W(pp) = 1\u0001 \u2212P \u0010 IdealA,SA (pp) = 1 \u0011\f \f \f \u2264negl (\ud835\udf06) . (35) 10 zkLLM: Zero Knowledge Proofs for Large Language Models CCS \u201924, October 14-18, 2024, Salt Lake City, U.S.A. 7.3 Overhead analysis In this section, we analyze the overhead of zkLLM, focusing on the running times for both the prover and the verifier, as well as the memory and communication costs. 7.3.1 Overhead of tlookup. In Protocol 1, we adhere to the assumption that \ud835\udc41= \ud835\udc42(\ud835\udc37). This guideline is strictly followed in our implementation due to the substantial sizes of tensors involved in LLM computations. The linear time complexity for both committing and proving in the Pedersen commitments and the sumcheck protocols results in a total computational complexity of \ud835\udc42(\ud835\udc37) for the prover in Protocol 1. 
Similarly, memory requirements are maintained at \ud835\udc42(\ud835\udc37), since all involved tensors, including the additionally computed m, A, and B, are of size \ud835\udc42(\ud835\udc37). The commitment and proof sizes are reduced to square root and logarithmic complexities, \ud835\udc42( \u221a \ud835\udc37) and \ud835\udc42(log \ud835\udc37) respectively, impacting the verifier\u2019s time for verifying the proof of opening and the sumcheck protocols. 7.3.2 Overhead of zkAttn. The overhead introduced by zkAttn is parameterized by \ud835\udc3e, the number of segments applied to the input. For an input of size \ud835\udc5a\ud835\udc5bin zkAttn, the total prover overhead, including both running time and memory consumption, is \ud835\udc42(\ud835\udc3e\ud835\udc5a\ud835\udc5b) as per Section 7.3.1. The communication overhead and verifier time are \ud835\udc42(\ud835\udc3e\u221a\ud835\udc5a\ud835\udc5b). In comparison with the bit-decomposition method, which incurs an \u03a9(\ud835\udc5a\ud835\udc5blog2 \ud835\udc35) overhead, zkAttn built upon tlookup achieves an \ud835\udc42 \u0010 \ud835\udc3e log2 \ud835\udc35 \u0011 reduction in prover overhead. Practically, \ud835\udc3eis chosen to be small enough to minimize both error and overhead while avoiding overly large segments (for example, \ud835\udc3e= 1, which would require compiling all possible input-output pairs into a table, leading to an impractical overhead of at least \u03a9(\ud835\udc35) for a unrealistically large total number of possible inputs \ud835\udc35that exceeds \ud835\udc5a\ud835\udc5b, where the previous analysis on overhead would not apply). 7.3.3 Overall overhead. zkLLM benefits from the linear prover overhead of sumcheck protocols and the logarithmic and squareroot verifier overheads of sumchecks and Pedersen commitments, respectively. Specialized sumchecks for tensor operations, like matrix multiplications, achieve less complexity than the computation process itself, further reducing proof overhead for each layer. Assuming prover overhead, communication cost, and verifier overhead per layer of an \ud835\udc3f-layer LLM are \ud835\udc61P, \ud835\udc50, and \ud835\udc61V respectively, the total overheads of zkLLM scale naturally to \ud835\udc42(\ud835\udc3f\ud835\udc61P), \ud835\udc42(\ud835\udc3f\ud835\udc50), and \ud835\udc42(\ud835\udc3f\ud835\udc61V). Where the latter two can be further reduced to \ud835\udc42( \u221a \ud835\udc3f\ud835\udc36) and \ud835\udc42( \u221a \ud835\udc3f\ud835\udc61V) by leveraging the repetitive structure across layers and batching up the commitments [38]. Additionally, unlike univariate polynomial-based ZKP systems that must be serialized, the use of sumcheck protocols over multilinear extensions and the compatible Pedersen commitment scheme in zkLLM allows for highly parallelized proof generation, thereby enabling efficient proof generation in a reasonable time. 8 EXPERIMENTS We developed zkLLM using CUDA, basing it on the CUDA code for the BLS12-381 curve [6] produced by the ec-gpu package [19]. The implementation of sequential verifier tasks, which cannot efficiently utilize CUDA, was adapted from the zkCNN implementation [38] that relies on the mcl package [40]. We evaluated zkLLM for inferences on two classes of open-source LLMs, namely OPT [67] and LLaMa-2 [52], supporting sizes up to 13 billion parameters. 
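As a rough illustration of the trade-off discussed in Section 7.3.2 between the number of segments K and the per-table memory, the helper below splits a given input range into segments whose tables respect a size budget. The helper and the numbers are ours and purely indicative; the concrete configuration used in the experiments, which also accounts for sub-integer precision and the most and least significant segment optimizations, is given in the next paragraph.

```python
import math

def plan_segments(total_bits: int, table_bits: int):
    # split a 2^total_bits input range into K digits whose tables hold at most 2^table_bits entries
    K = math.ceil(total_bits / table_bits)
    digit_bits = [table_bits] * (K - 1) + [total_bits - table_bits * (K - 1)]
    table_sizes = [2 ** bbits for bbits in digit_bits]
    return K, table_sizes

K, table_sizes = plan_segments(total_bits=64, table_bits=16)
print(K, table_sizes)        # 4 segments, each with a 2^16-entry table
print(sum(table_sizes))      # about 2^18 table entries in total, versus 2^64 for a single table
```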
For both types of models, our focus was on performing verifiable inferences using the designated models, applying samples with the default sequence length of 2048 from the C4 dataset [45]. Our experiments were conducted with resources including 124.5GB of memory, 12 CPU cores of an AMD EPYC 7413 (2.65 GHz with 128M cache L3), and an NVIDIA A100SMX4 GPU with 40GB of memory, all allocated from a computing node. Throughout our experiments, we consistently set the scaling factor for both data embedding and model parameters at 2^16. All rescaling operations were integrated with the subsequent activation functions. As detailed in Section 4.2, this integration necessitated the use of multiple tlookups. Specifically, the number of tlookups corresponds to the number of times the input to the activation function requires rescaling, with each tlookup having a size of 2^16, to avoid excessive memory usage. Additionally, since the input to the Softmax function in zkAttn of each layer undergoes two multiplication operations, the cumulative scaling factor reaches 2^64. To manage this, we deployed K = 5 tlookups, each of size 2^16. This setup includes L = 3 least significant segments, with the remaining two segments accommodating all potential inputs within a scale of 2^16 and a precision of 2^-16 when reverted to the real domain. The tolerable error margins were selected in accordance with Section 7.1. This approach resulted in an approximate total L1 error of 10^-2 on the output, a level comparable to the rounding error induced by half-precision floating points used in state-of-the-art LLMs. Notably, all proofs involved in the results of this section have been successfully validated by the semi-honest verifier.
In Table 1, we present detailed information regarding the overhead associated with zkLLM for models of various sizes. The data includes the time required for the prover to commit and hide the model (committing time and commitment sizes), as well as the prover's time, proof size, and the verifier's time in response to an input prompt from the latter. As the first endeavor to apply zero-knowledge proofs to LLMs with up to 13 billion parameters, to the best of our knowledge, zkLLM has achieved significant results toward practical and industrial applicability. Once the model is trained, the prover requires up to 20 minutes to commit to the model and subsequently publish a commitment of approximately 10MB for public scrutiny via zkLLM. When prompted by the verifier, the prover is able to generate a proof for the correctness of the entire inference process in less than 15 minutes. Additionally, as the design of zkAttn effectively resolves the inherent bottleneck of listing all input-output pairs, the memory consumption is effectively controlled under 23.1GB, which fits zkLLM into commonly used GPUs in machine learning, like Tesla V100 and A100. It is important to note that while the time for committing generally scales with the size of the parameters, the time for generating proofs scales more slowly. This slower scaling is attributed to the less significant differences in the complexities of intermediate computations and more efficient use of parallel computing resources as the size of the tensors increases. Ultimately, the succinct proof, which is only about 100kB in size, can be verified by the verifier in a matter of seconds. Despite this efficiency, the verifier is provably
unable to glean any additional information about the model or the inference result, ensuring the correctness of the inference while maintaining the confidentiality of the model parameters. Moreover, the numerical error due to the unavoidable discretization of the entire process for the application of the cryptographic tools does not cause significant accuracy drops: on the C4 dataset [45], the increase of perplexity is less than 0.1, and the impact diminishes to less than 0.01 as the sizes of the models scale to 13B.
Table 1: The overhead of zkLLM on OPT and LLaMa-2. Models (in column order): OPT-125M, OPT-350M, OPT-1.3B, OPT-2.7B, OPT-6.7B, OPT-13B, LLaMa-2-7B, LLaMa-2-13B.
Committing time (s): 11.8, 33.1, 127, 273, 654, 1.27 × 10^3, 531, 986.
Commitment size (MB): 0.996, 1.67, 3.32, 4.58, 7.22, 10.1, 7.97, 11.0.
Prover time (s): 73.9, 111, 221, 352, 548, 713, 620, 803.
Proof size (kB): 141, 144, 147, 152, 157, 160, 183, 188.
Verifier time (s): 0.342, 0.593, 0.899, 1.41, 2.08, 3.71, 2.36, 3.95.
Memory usage (GB): 1.88, 2.38, 3.71, 6.60, 15.0, 22.9, 15.5, 23.1.
C4 Perplexity (orig): 26.56, 22.59, 16.07, 14.34, 12.71, 12.06, 7.036, 6.520.
C4 Perplexity (quant): 26.65, 22.66, 16.12, 14.37, 12.73, 12.07, 7.049, 6.528.
In Figure 4, we further compare zkLLM with zkML [33], the first zero-knowledge proof to have achieved verifiable inference for GPT-2 under identical hardware conditions. Beyond the size of GPT-2 (1.5B parameters), where zkML results in an out-of-memory (OOM) error, we provide an estimation of the required proving time. Thanks to the design tailored for non-arithmetic tensor operations and the attention mechanism prevalent in LLMs, as well as its CUDA implementation, zkLLM extends the zero-knowledge verifiability to LLMs with 10x larger sizes, achieving an approximate 50x speedup.
Figure 4: Comparison between zkLLM and zkML (proving time in seconds versus number of parameters in billions). Results of test cases with OOM errors are estimated and marked in red.
8.1 Additional experimental results on tlookup and zkAttn
To further demonstrate the efficiency of tlookup in addressing non-arithmetic tensor operations in deep learning, as well as zkAttn, which is pivotal for verifiable computations in LLMs, we isolated an instance of zkAttn from the first layer of variously sized OPT models on input sequences of different lengths. We then measured the overhead incurred by zkAttn, including the multiple tlookups it encompasses. The results are presented in Figure 5. It is evident that, in contrast to the overall overhead, the overhead specific to zkAttn is less influenced by the size of the model, particularly regarding proving time. However, the length of the input sequence significantly impacts various aspects of the overhead. This observation can be attributed to the fact that the attention mechanism, while involving LLM parameters, is more significantly affected by the interactions between intermediate values (e.g., Q, K, V), whose dimensions play a crucial role in determining the overhead. Notably, the largest tested sequence length, 4096, exceeds the original design specifications of OPT models and was primarily included as a reference to assess the impact of sequence length on overhead.
In contrast, in Table 1, which documents overall results for zkLLM, we set the sequence length to 2048\u2014the maximum feasible value\u2014to maintain experimental consistency and fairness. This paper introduces zkLLM, marking the inaugural specialized zero-knowledge proof tailored for large language models, as far as our current knowledge extends. zkLLM pioneers zero-knowledge verifiable computations for general non-arithmetic operations within neural networks, ensuring full parallelizability and zero additional overhead through the implementation of tlookup. Building on this foundation, we further present zkAttn, a novel zero-knowledge proof specifically designed for the attention mechanism\u2014a pivotal component underpinning the exceptional performance of modern LLMs. With a CUDA implementation optimized for parallel computing resources in deep learning, zkLLM achieves a groundbreaking milestone as the first study to provide zero-knowledge verifiability for LLMs with 13 billion parameters. This endeavor stands as a significant contribution towards fortifying the legitimacy of LLMs in light of their transformative impact on various domains." + }, + { + "url": "http://arxiv.org/abs/2212.05428v2", + "title": "ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference Pipeline", + "abstract": "Machine Learning as a service (MLaaS) permits resource-limited clients to\naccess powerful data analytics services ubiquitously. Despite its merits, MLaaS\nposes significant concerns regarding the integrity of delegated computation and\nthe privacy of the server's model parameters. To address this issue, Zhang et\nal. (CCS'20) initiated the study of zero-knowledge Machine Learning (zkML). Few\nzkML schemes have been proposed afterward; however, they focus on sole ML\nclassification algorithms that may not offer satisfactory accuracy or require\nlarge-scale training data and model parameters, which may not be desirable for\nsome applications. We propose ezDPS, a new efficient and zero-knowledge ML\ninference scheme. Unlike prior works, ezDPS is a zkML pipeline in which the\ndata is processed in multiple stages for high accuracy. Each stage of ezDPS is\nharnessed with an established ML algorithm that is shown to be effective in\nvarious applications, including Discrete Wavelet Transformation, Principal\nComponents Analysis, and Support Vector Machine. We design new gadgets to prove\nML operations effectively. We fully implemented ezDPS and assessed its\nperformance on real datasets. Experimental results showed that ezDPS achieves\none-to-three orders of magnitude more efficient than the generic circuit-based\napproach in all metrics while maintaining more desirable accuracy than single\nML classification approaches.", + "authors": "Haodi Wang, Thang Hoang", + "published": "2022-12-11", + "updated": "2023-02-01", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.09126v2", + "title": "Verifiable and Provably Secure Machine Unlearning", + "abstract": "Machine unlearning aims to remove points from the training dataset of a\nmachine learning model after training; for example when a user requests their\ndata to be deleted. While many machine unlearning methods have been proposed,\nnone of them enable users to audit the procedure. Furthermore, recent work\nshows a user is unable to verify if their data was unlearnt from an inspection\nof the model alone. 
Rather than reasoning about model parameters, we propose to\nview verifiable unlearning as a security problem. To this end, we present the\nfirst cryptographic definition of verifiable unlearning to formally capture the\nguarantees of a machine unlearning system. In this framework, the server first\ncomputes a proof that the model was trained on a dataset $D$. Given a user data\npoint $d$ requested to be deleted, the server updates the model using an\nunlearning algorithm. It then provides a proof of the correct execution of\nunlearning and that $d \\notin D'$, where $D'$ is the new training dataset. Our\nframework is generally applicable to different unlearning techniques that we\nabstract as admissible functions. We instantiate the framework, based on\ncryptographic assumptions, using SNARKs and hash chains. Finally, we implement\nthe protocol for three different unlearning techniques (retraining-based,\namnesiac, and optimization-based) to validate its feasibility for linear\nregression, logistic regression, and neural networks.", + "authors": "Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot", + "published": "2022-10-17", + "updated": "2023-03-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.06961v1", + "title": "VeriML: Enabling Integrity Assurances and Fair Payments for Machine Learning as a Service", + "abstract": "Machine Learning as a Service (MLaaS) allows clients with limited resources\nto outsource their expensive ML tasks to powerful servers. Despite the huge\nbenefits, current MLaaS solutions still lack strong assurances on: 1) service\ncorrectness (i.e., whether the MLaaS works as expected); 2) trustworthy\naccounting (i.e., whether the bill for the MLaaS resource consumption is\ncorrectly accounted); 3) fair payment (i.e., whether a client gets the entire\nMLaaS result before making the payment). Without these assurances, unfaithful\nservice providers can return improperly-executed ML task results or partially\ntrained ML models while asking for over-claimed rewards. Moreover, it is hard\nto argue for wide adoption of MLaaS to both the client and the service\nprovider, especially in the open market without a trusted third party. In this\npaper, we present VeriML, a novel and efficient framework to bring integrity\nassurances and fair payments to MLaaS. With VeriML, clients can be assured that\nML tasks are correctly executed on an untrusted server and the resource\nconsumption claimed by the service provider equals to the actual workload. We\nstrategically use succinct non-interactive arguments of knowledge (SNARK) on\nrandomly-selected iterations during the ML training phase for efficiency with\ntunable probabilistic assurance. We also develop multiple ML-specific\noptimizations to the arithmetic circuit required by SNARK. Our system\nimplements six common algorithms: linear regression, logistic regression,\nneural network, support vector machine, Kmeans and decision tree. 
The\nexperimental results have validated the practical performance of VeriML.", + "authors": "Lingchen Zhao, Qian Wang, Cong Wang, Qi Li, Chao Shen, Xiaodong Lin, Shengshan Hu, Minxin Du", + "published": "2019-09-16", + "updated": "2019-09-16", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.08674v1", + "title": "Scaling up Trustless DNN Inference with Zero-Knowledge Proofs", + "abstract": "As ML models have increased in capabilities and accuracy, so has the\ncomplexity of their deployments. Increasingly, ML model consumers are turning\nto service providers to serve the ML models in the ML-as-a-service (MLaaS)\nparadigm. As MLaaS proliferates, a critical requirement emerges: how can model\nconsumers verify that the correct predictions were served, in the face of\nmalicious, lazy, or buggy service providers?\n In this work, we present the first practical ImageNet-scale method to verify\nML model inference non-interactively, i.e., after the inference has been done.\nTo do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct\nnon-interactive argument of knowledge), a form of zero-knowledge proofs.\nZK-SNARKs allows us to verify ML model execution non-interactively and with\nonly standard cryptographic hardness assumptions. In particular, we provide the\nfirst ZK-SNARK proof of valid inference for a full resolution ImageNet model,\nachieving 79\\% top-5 accuracy. We further use these ZK-SNARKs to design\nprotocols to verify ML model execution in a variety of scenarios, including for\nverifying MLaaS predictions, verifying MLaaS model accuracy, and using ML\nmodels for trustless retrieval. Together, our results show that ZK-SNARKs have\nthe promise to make verified ML model inference practical.", + "authors": "Daniel Kang, Tatsunori Hashimoto, Ion Stoica, Yi Sun", + "published": "2022-10-17", + "updated": "2022-10-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.09186v3", + "title": "pvCNN: Privacy-Preserving and Verifiable Convolutional Neural Network Testing", + "abstract": "This paper proposes a new approach for privacy-preserving and verifiable\nconvolutional neural network (CNN) testing, enabling a CNN model developer to\nconvince a user of the truthful CNN performance over non-public data from\nmultiple testers, while respecting model privacy. To balance the security and\nefficiency issues, three new efforts are done by appropriately integrating\nhomomorphic encryption (HE) and zero-knowledge succinct non-interactive\nargument of knowledge (zk-SNARK) primitives with the CNN testing. First, a CNN\nmodel to be tested is strategically partitioned into a private part kept\nlocally by the model developer, and a public part outsourced to an outside\nserver. Then, the private part runs over HE-protected test data sent by a\ntester and transmits its outputs to the public part for accomplishing\nsubsequent computations of the CNN testing. Second, the correctness of the\nabove CNN testing is enforced by generating zk-SNARK based proofs, with an\nemphasis on optimizing proving overhead for two-dimensional (2-D) convolution\noperations, since the operations dominate the performance bottleneck during\ngenerating proofs. 
We specifically present a new quadratic matrix programs\n(QMPs)-based arithmetic circuit with a single multiplication gate for\nexpressing 2-D convolution operations between multiple filters and inputs in a\nbatch manner. Third, we aggregate multiple proofs with respect to a same CNN\nmodel but different testers' test data (i.e., different statements) into one\nproof, and ensure that the validity of the aggregated proof implies the\nvalidity of the original multiple proofs. Lastly, our experimental results\ndemonstrate that our QMPs-based zk-SNARK performs nearly 13.9$\\times$faster\nthan the existing QAPs-based zk-SNARK in proving time, and 17.6$\\times$faster\nin Setup time, for high-dimension matrix multiplication.", + "authors": "Jiasi Weng, Jian Weng, Gui Tang, Anjia Yang, Ming Li, Jia-Nan Liu", + "published": "2022-01-23", + "updated": "2023-05-28", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.11334v2", + "title": "Proof of Unlearning: Definitions and Instantiation", + "abstract": "The \"Right to be Forgotten\" rule in machine learning (ML) practice enables\nsome individual data to be deleted from a trained model, as pursued by recently\ndeveloped machine unlearning techniques. To truly comply with the rule, a\nnatural and necessary step is to verify if the individual data are indeed\ndeleted after unlearning. Yet, previous parameter-space verification metrics\nmay be easily evaded by a distrustful model trainer. Thus, Thudi et al.\nrecently present a call to action on algorithm-level verification in USENIX\nSecurity'22.\n We respond to the call, by reconsidering the unlearning problem in the\nscenario of machine learning as a service (MLaaS), and proposing a new\ndefinition framework for Proof of Unlearning (PoUL) on algorithm level.\nSpecifically, our PoUL definitions (i) enforce correctness properties on both\nthe pre and post phases of unlearning, so as to prevent the state-of-the-art\nforging attacks; (ii) highlight proper practicality requirements of both the\nprover and verifier sides with minimal invasiveness to the off-the-shelf\nservice pipeline and computational workloads. Under the definition framework,\nwe subsequently present a trusted hardware-empowered instantiation using SGX\nenclave, by logically incorporating an authentication layer for tracing the\ndata lineage with a proving layer for supporting the audit of learning. We\ncustomize authenticated data structures to support large out-of-enclave storage\nwith simple operation logic, and meanwhile, enable proving complex unlearning\nlogic with affordable memory footprints in the enclave. We finally validate the\nfeasibility of the proposed instantiation with a proof-of-concept\nimplementation and multi-dimensional performance evaluation.", + "authors": "Jiasi Weng, Shenglong Yao, Yuefeng Du, Junjie Huang, Jian Weng, Cong Wang", + "published": "2022-10-20", + "updated": "2022-10-21", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. 
This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. 
This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. 
We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. 
Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). 
We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. 
The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. 
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. 
Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. 
Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. 
Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. 
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. 
With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. 
Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. 
Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. 
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. 
This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. 
In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. 
Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. 
In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. 
Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ) which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. 
The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. 
Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. 
Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. 
Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. 
The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. 
Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. 
SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. 
There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. 
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. 
A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. 
This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. 
Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. 
These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. 
Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. 
Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. 
Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. 
For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. 
Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. 
These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. 
This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. 
ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. 
We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. 
Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. 
Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. 
Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.14716v1", + "title": "Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities", + "abstract": "Large language models (LLMs) can adapt to new tasks through in-context\nlearning (ICL) based on a few examples presented in dialogue history without\nany model parameter update. Despite such convenience, the performance of ICL\nheavily depends on the quality of the in-context examples presented, which\nmakes the in-context example selection approach a critical choice. This paper\nproposes a novel Bayesian in-Context example Selection method (ByCS) for ICL.\nExtending the inference probability conditioned on in-context examples based on\nBayes' theorem, ByCS focuses on the inverse inference conditioned on test\ninput. Following the assumption that accurate inverse inference probability\n(likelihood) will result in accurate inference probability (posterior),\nin-context examples are selected based on their inverse inference results.\nDiverse and extensive cross-tasking and cross-modality experiments are\nperformed with speech, text, and image examples. Experimental results show the\nefficacy and robustness of our ByCS method on various models, tasks and\nmodalities.", + "authors": "Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.SD", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Multimodal ICL. 
Inspired by the decoder-only ICL in text-based NLP, efforts have been made to extend such a few-shot learning ability to other modalities, in particular image and audio. Frozen (Tsimpoukelli et al., 2021) is the first attempt to exploit ICL ability in a vision-language model (VLM). By using a vision encoder to map the input image to textual tokens in the input embedding space of a frozen text language model, Frozen can handle interleaved image and text input and achieve image-text ICL. Other work improves the ICL ability of VLMs by using adapter blocks (Eichenberg et al., 2022), adding blockwise modality fusion structures (Alayrac et al., 2022), and scaling up the model size (Sun et al., 2023a). In the audio modality, Borsos et al. (2023) proposed AudioLM, a language model based on quantised audio tokens for audio generation tasks, which exhibits ICL ability for audio continuation. [Figure 2: Multimodal ICL, illustrated with three panels: (a) text ICL, (b) ASR ICL and (c) VQA ICL. Although ICL on different modalities shares the same formula expression, the actual inputs and inference model architectures differ. For ASR ICL on Whisper, the speech is fed into the encoder while the text example labels are fed into the decoder, which is aware of the speech input through cross-attention with the encoder. For VQA ICL, images are first encoded into the same embedding space as the LM's input, and the interleaved images and texts are then fed into the decoder LM.] Similarly, Wang et al. (2023a) proposed VALL-E, a controllable text-to-speech synthesis system with ICL ability based on audio and text prompts. Wang et al. (2023c) presented the first ICL work for ASR based on paired speech-text examples, which adapted the Whisper (Radford et al., 2023) model and obtained considerable word error rate (WER) reductions on unseen Chinese dialects. Further explorations enabled recent speech-language models to perform ICL on more speech input tasks through warm-up training (Hsu et al., 2023) or speech instruction-tuning (Pan et al., 2023). In-Context Example Selection Methods. Rubin et al. (2022) proposed a scoring LM to retrieve in-context examples using contrastive learning, which can also be trained with reinforcement learning algorithms such as Q-learning (Zhang et al., 2022) and policy gradient (Lu et al., 2022a). Alternatively, examples that are semantically similar to the test input can be selected. Liu et al. (2022) proposed to select the k nearest neighbours (kNN) in the embedding space of the examples. When combined with chain-of-thought prompting (Wei et al., 2022), Qin et al. (2023) proposed to select examples in the embedding space of the reasoning path. LLM feedback is often used in in-context example selection. Iter et al. (2023) selected in-context examples with cross-entropy differences of the fine-tuned model
based on the assumption that ICL may act as implicit gradient descent (Dai et al., 2022). [Figure 3: The detailed pipeline of our ByCS method: first, conduct the first-round inference Ŷ = arg max P(Y|Cinput, Clabel, X) to estimate the label of the test input; then, perform inverse inference Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput) on each example in the datastore, where the test input and the estimated label serve as the in-context example (a detailed illustration of inverse inference can be found in Figure 5 in the Appendix); finally, rank the in-context examples by the text similarity score Q = Similarity(Clabel, Ĉlabel) between the inverse inference result and the true context label. Examples with high similarity scores are selected due to their high mutual information interaction.] Nguyen and Wong (2023) identified highly impactful examples according to the proposed influence score. Although ByCS also uses LLM feedback when evaluating the quality of in-context examples through inverse inference, it leverages the text similarity between the inverse inference results and the corresponding ground-truth labels, without requiring the complete output probability distributions, which are often not available for commercial LLMs. Wang et al. (2023d) selected optimal in-context examples in a Bayesian framework by viewing LLMs as latent variable models and ICL as latent concept learning. In comparison, ByCS directly extends the ICL inference probability using Bayes' theorem. Xu and Zhang (2024) selected examples with a high discrepancy between the labels and the LLM's outputs when performing question answering. ByCS also selects examples from candidates in a datastore based on the LLM's outputs, but computes the mutual information interactions between the in-context examples and the test input.",
    "pre_questions": [],
    "main_content": "Introduction Large language models (LLMs) (Touvron et al., 2023b; OpenAI, 2023a) have achieved great success on many text-based natural language processing (NLP) tasks. By connecting with extra visual and audio encoders (Sun et al., 2023b; Radford et al., 2023), the resulting multimodal LLMs can also achieve remarkable performance on image-text and audio-text tasks (Li et al., 2023; OpenAI, 2023b; Tang et al., 2023). With the ability of in-context learning (ICL) (Brown et al., 2020), LLMs can adapt to new tasks easily and efficiently in a training-free manner, generating output following the prompting paradigm based on a few input-label pairs prepended to the test input. The existence of ICL ability has also been verified on image-text and audio-text tasks (Tsimpoukelli et al., 2021; Wang et al., 2023c; Hsu et al., 2023; Pan et al., 2023).
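To make the prompting paradigm just described concrete, below is a minimal sketch of how a few-shot ICL prompt can be assembled from input-label pairs prepended to the test input. The build_icl_prompt helper, its instruction string and the Input/Label template are illustrative assumptions, not a format taken from the paper.

```python
from typing import List, Tuple

def build_icl_prompt(examples: List[Tuple[str, str]], test_input: str,
                     instruction: str = "Answer the question.") -> str:
    """Assemble a few-shot ICL prompt: instruction, k input-label pairs, then the test input."""
    lines = [instruction]
    for c_input, c_label in examples:              # each (Cinput, Clabel) pair
        lines.append(f"Input: {c_input}\nLabel: {c_label}")
    lines.append(f"Input: {test_input}\nLabel:")   # the model continues with its prediction Y
    return "\n\n".join(lines)

# Example usage with two in-context examples for sentiment analysis.
prompt = build_icl_prompt(
    examples=[("the film is a delight", "positive"),
              ("a tedious, joyless slog", "negative")],
    test_input="an uneven but ultimately rewarding story",
    instruction="Classify the sentiment of each input as positive or negative.",
)
print(prompt)
```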
[Figure 1: A brief illustration of the proposed Bayesian in-context example selection: (i) first randomly selecting k examples; (ii) examining the examples in the datastore through “inverse inference”, where the test input-label pair serves as the in-context example; and (iii) selecting samples with correct label predictions as good examples (colored in blue), which are considered to have high mutual information interaction with the test input.] Although ICL requires no gradient descent and thus does not suffer from the instability caused by stochastic optimisation compared to other test-time adaptation approaches, care still needs to be taken when selecting the in-context examples, since they often lead to distinct ICL performance variations (Zhao et al., 2021; Min et al., 2022; Lu et al., 2022b). Prior work on in-context example selection trains an example retrieval module (Rubin et al., 2022; Zhang et al., 2022; Lu et al., 2022a; Wang et al., 2023b), selects close examples in the embedding space (Liu et al., 2022; An et al., 2023; Qin et al., 2023), or leverages the feedback of LLMs to score the examples (Su et al., 2022; Nguyen and Wong, 2023; Iter et al., 2023; Mavromatis et al., 2023). While boosting ICL performance, most methods treat the in-context examples and the test input separately, overlooking their mutual interactions. This paper proposes ByCS (Bayesian in-Context example Selection), a novel in-context example selection approach focusing on mutual information interactions based on the Bayesian formula. We refer to the inference of the test input conditioned on the in-context examples as ICL inference, and the inference of an in-context example's input based on the test input-label pair as the inverse inference. By introducing inverse inference via Bayes' theorem, ByCS leverages the inverse inference result to evaluate the quality of each in-context example. Assuming the contextual information interaction is mutual, an accurate inverse inference is likely to result in an accurate inference. Examples with accurate inverse inference results are selected as optimal examples. Extensive experiments across audio, image, and text modalities are conducted to verify the effectiveness and robustness of ByCS, covering ASR, visual question answering (VQA), and NLP tasks (including topic classification, sentiment analysis, and text-to-SQL). Our main contributions are summarised as follows: • ByCS, a novel in-context example selection method inspired by Bayes' theorem, is proposed. To improve efficiency, the use of a smaller model for fast inverse inference and a ranking-based pre-selection to reduce the number of candidate in-context examples are also proposed. • The method is verified using both “decoder-only” ICL on NLP tasks and “encoder-decoder” ICL on ASR and VQA. To the best of our knowledge, this is the first work on in-context example selection verified across text, audio, and visual modalities, as shown in Figure 2.
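The two directions of inference defined above can be written compactly as follows, anticipating the notation of the next section (X is the test input with label Y, and (Cinput, Clabel) is an in-context example); this restatement is added for clarity and uses the same probabilities as Eqns. (1) and (2) below.

```latex
% ICL inference: predict the test label conditioned on the in-context example and the test input.
\hat{Y} = \arg\max_{Y} \; P\big(Y \mid C_{\mathrm{input}}, C_{\mathrm{label}}, X\big)

% Inverse inference: predict the example label, treating the test input-label pair as the context.
\hat{C}_{\mathrm{label}} = \arg\max_{C_{\mathrm{label}}} \; P\big(C_{\mathrm{label}} \mid X, \hat{Y}, C_{\mathrm{input}}\big)
```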
3 Bayesian In-Context Example Selection As shown in Figure 3, given a test input X and paired in-context examples (Cinput, Clabel), LLMs predict the most probable answer Ŷ by maximising the inference probability P(Y|Cinput, Clabel, X): Ŷ = arg max P(Y|Cinput, Clabel, X), (1) where Cinput and Clabel are the inputs and labels of different data types in different tasks. Regarding text-based NLP tasks, Cinput and Clabel are text questions and the corresponding answers. Regarding ASR, Cinput and Clabel are speech audio and the corresponding text transcriptions. Regarding VQA, Cinput are images and text questions based on the images, and Clabel are the text answers. The inference probability can be extended using Bayes' theorem: P(Y|Cinput, Clabel, X) = P(Clabel|X, Y, Cinput) P(Y|X, Cinput) / P(Clabel|X, Cinput). (2) The likelihood P(Clabel|X, Y, Cinput) is termed the inverse inference probability, since it can be interpreted as the probability of the context label Clabel when the test input-label pair (X, Y) is inversely treated as the in-context example. ByCS focuses on the inverse inference probability and assumes the influence of the prior P(Y|X, Cinput) is subordinate, for simplification. In practice, since the ground-truth label Yref of the test input X is not available, the correct likelihood P(Clabel|X, Yref, Cinput) is approximated by P(Clabel|X, Ŷ, Cinput), where Ŷ is produced by a first-round inference. Specifically: • First, the first-round inference is performed to produce a hypothesised label Ŷ based on the test input X, which can be achieved with the standard decoding rule without any in-context examples, i.e. Ŷ = arg max P(Y|X). Better performance can be achieved when the hypothesised label is obtained with in-context examples, Ŷ = arg max P(Y|˜Cinput, ˜Clabel, X), based on Eqn. (1), where (˜Cinput, ˜Clabel) is a pair of first-round in-context examples selected either randomly or using other example selection methods. • Next, for the datastore with all candidate in-context examples, generate the inverse inference result Ĉlabel for every candidate example based on the approximated inverse inference probability P(Clabel|X, Ŷ, Cinput), i.e. Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput). • Last, compute Q = Similarity(Clabel, Ĉlabel), the text similarity between Clabel and Ĉlabel, and use Q as the metric for evaluating the quality of the inverse inference. Since a more accurate inverse inference probability often results in higher text similarity, ByCS selects the in-context examples with higher Q. Note that Q is adopted because it does not require access to the model's output probability distribution, which is often unavailable for commercial LLMs. To reduce the computation cost of inverse inference, two methods are used when the number of examples in the datastore is large: • Conduct inverse inference using a model from the same model family as the inference model but with a smaller model size. • Apply ByCS to a small number (e.g. N) of pre-selected candidate examples. In pre-selection, all examples in the datastore are first ranked, and only the top N examples are reserved as the pre-selected candidates. The pre-selection is performed using fast ranking-based algorithms such as kNN. 4 Experimental Setup 4.1 Models Experimental results are reported on audio, text, and image modalities.
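To tie the three steps above together before the individual models are described, the following is a minimal sketch of the ByCS selection loop. It is illustrative rather than the authors' implementation: llm_generate stands for whatever generation call is used (Whisper, GPT-3.5/GPT-4 or Emu2 depending on the modality, with audio or images in place of the string prompts shown here), text_similarity stands for the user-specified similarity metric Q, and the prompt templates are assumptions.

```python
from typing import Callable, List, Optional, Tuple

def bycs_select(test_input: str,
                datastore: List[Tuple[str, str]],           # candidate (Cinput, Clabel) pairs
                llm_generate: Callable[[str], str],          # black-box generation call
                text_similarity: Callable[[str, str], float],
                first_round_example: Optional[Tuple[str, str]] = None,
                k: int = 1) -> List[Tuple[str, str]]:
    """Rank candidate in-context examples by inverse-inference quality and return the top k."""
    # Step 1: first-round inference to obtain a hypothesised label Y_hat for the test input,
    # optionally conditioned on a first-round example, as in Eqn. (1).
    if first_round_example is not None:
        c_in, c_lab = first_round_example
        y_hat = llm_generate(f"Input: {c_in}\nLabel: {c_lab}\n\nInput: {test_input}\nLabel:")
    else:
        y_hat = llm_generate(f"Input: {test_input}\nLabel:")

    # Step 2: inverse inference, treating (test_input, y_hat) as the in-context example
    # and each candidate's input as the new query: Clabel_hat = argmax P(Clabel | X, Y_hat, Cinput).
    scored = []
    for c_input, c_label in datastore:
        c_label_hat = llm_generate(
            f"Input: {test_input}\nLabel: {y_hat}\n\nInput: {c_input}\nLabel:")
        # Step 3: score each candidate by Q = Similarity(Clabel, Clabel_hat).
        q = text_similarity(c_label, c_label_hat)
        scored.append((q, (c_input, c_label)))

    # Select the k candidates with the highest similarity scores.
    scored.sort(key=lambda item: item[0], reverse=True)
    return [example for _, example in scored[:k]]
```

The per-candidate loop is what motivates the two efficiency measures listed above: llm_generate can be swapped for a smaller model from the same family during inverse inference, and the datastore can first be pruned to the top N candidates with a fast ranking method such as kNN.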
For audio-text and image-text tasks, ASR and VQA are used to evaluate the ICL ability of encoder-decoder structured models. For text-only NLP tasks, topic classification, sentiment analysis, and text-to-SQL are used to evaluate the ICL performance of decoder-only models. For the NLP tasks, experiments are conducted using GPT-3.5-Turbo and GPT-4 (OpenAI, 2023a). For the ASR task, the open-sourced Whisper model (Radford et al., 2023) is used, a series of speech models released by OpenAI. The Whisper model family uses a vanilla encoder-decoder Transformer (Vaswani et al., 2017) architecture, ranging from 39 million (M) parameters (tiny) to 1.55 billion (B) parameters (large). Specifically, the Whisper small (244M) and Whisper large-v2/-v3 (1.55B) models are used. For the VQA task, experiments are performed on Emu2 (Sun et al., 2023a) and GPT-4V (OpenAI, 2023b). Emu2 is a 37B vision-language model (VLM) that leverages the pretrained EVA-02-CLIP-E-plus (Sun et al., 2023b) and LLAMA-33B (Touvron et al., 2023a), and has ICL ability when taking interleaved inputs of images and text. For experiments on Emu2, the outputs are generated using a greedy decoding setting for fast evaluation. GPT-4V is a GPT-4 variant that can directly perceive image inputs, showing state-of-the-art image understanding performance. 4.2 Datasets Seven datasets covering NLP, ASR and VQA are used in this paper. For text-only ICL, four datasets are used in four different task categories: the TREC dataset for topic classification (Voorhees and Tice, 2000), the SST2 dataset for sentiment analysis (Socher et al., 2013), the Spider dataset for text-to-SQL (Yu et al., 2018), and the CHiME4 (Vincent et al., 2017) split of the HyPoradise dataset (Chen et al., 2023) for generative language model re-scoring to correct pre-generated ASR transcriptions. For audio-text ICL, two datasets are used for ASR tasks, namely RASC863 (ChineseLDC.org, 2004) and CORAAL (Gunter et al., 2021). RASC863 is a commonly used Chinese dialect ASR dataset, and its dialectal-word splits of the Chongqing and Guangzhou dialects are used. CORAAL is an English corpus with speech recordings from regional African Americans. For image-text ICL, VQA experiments are conducted on OKVQA (Marino et al., 2019), a dataset that requires methods to draw upon external knowledge to answer the visual questions. 4.3 Baselines On all three modalities, random selection and improved KATE (Liu et al., 2022) are used as baseline approaches. For random selection, in-context examples are uniformly selected from the example datastore three times and the average results are reported. For KATE (Liu et al., 2022), the k neighbours that are nearest to the test input in the embedding space in terms of Euclidean distance are selected. For ASR ICL, the encoder of Whisper large-v2 acts as the embedding retrieval module on the Chinese dataset, while on the English dataset we use the encoder of Whisper large-v3. For text ICL, OpenAI text-embedding-ada-002 is used as the embedding retrieval model.
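The KATE-style retrieval just described is essentially a nearest-neighbour search over embeddings. A minimal sketch follows; embed is a hypothetical stand-in for the retrieval modules named above (a Whisper encoder, text-embedding-ada-002, or an EVA-CLIP encoder), and the Euclidean-distance ranking follows the description here rather than any released code.

```python
import numpy as np

# Illustrative sketch of the KATE-style baseline: select the k candidates whose
# embeddings are nearest (Euclidean distance) to the test input's embedding.
# `embed` is a hypothetical stand-in for the embedding retrieval module.

def kate_select(test_input, datastore, embed, k=1):
    query_vec = np.asarray(embed(test_input), dtype=float)
    ranked = sorted(
        datastore,
        key=lambda ex: float(np.linalg.norm(query_vec - np.asarray(embed(ex[0]), dtype=float))),
    )
    return ranked[:k]  # nearest neighbours first
```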
For VQA ICL, KATE is based only on the embedding space of the query image, and EVA02-CLIP-bigE-14-plus (Sun et al., 2023b) serves as the embedding retrieval module. We use the term \u201cKATE+\u201d to refer to this baseline in our paper, to stress that it is actually an improved KATE version enhanced with stronger embedding retrieval models, which results in better performance. For text ICL, bm25 (Robertson et al., 1995) and LLM-R (Wang et al., 2023b) are also compared as baselines. bm25 is a ranking metric originally designed for search engines to estimate the relevance of documents to a given query based on word-overlapping similarity. LLM-R provides a recent and performant dense retriever distilled using a reward model trained on LLM feedback.

Table 1: %WERs on the RASC863 dialectal word dataset and CORAAL with different in-context example selection methods. For RASC863, the example datastore is the RASC863 dialectal word dataset of the corresponding dialect. For CORAAL, the size of the example datastore for ByCS is narrowed down to 10 using the kNN algorithm. For the \u201coracle ByCS\u201d setting, the ground-truth label Yref is used in the inverse inference.
(a) Results with Whisper-large-v2 (columns: RASC863 Chongqing k = 1/2/3/4; RASC863 Guangzhou k = 1/2/3/4; CORAAL <15s k = 1)
random: 67.1 / 56.1 / 52.7 / 51.0; 61.7 / 38.3 / 31.2 / 28.8; 12.4
KATE+: 67.1 / 54.7 / 51.3 / 49.7; 61.3 / 36.1 / 26.9 / 24.8; 12.0
ByCS: 62.4 / 53.4 / 50.6 / 48.6; 49.5 / 31.9 / 27.1 / 26.6; 11.7
oracle ByCS: 62.4 / 52.4 / 49.5 / 47.2; 49.4 / 30.7 / 25.8 / 24.7; 11.7
(b) Results with Whisper-large-v3 (columns as in (a))
random: 68.9 / 60.3 / 57.0 / 55.7; 67.1 / 42.8 / 38.3 / 35.2; 11.6
KATE+: 68.1 / 58.2 / 54.8 / 54.1; 67.7 / 41.3 / 34.3 / 31.6; 11.4
ByCS: 63.5 / 56.3 / 53.5 / 51.8; 50.7 / 36.7 / 33.0 / 31.5; 11.3
oracle ByCS: 63.4 / 55.2 / 53.0 / 50.7; 51.3 / 35.6 / 31.9 / 30.7; 11.2

5 Results 5.1 ASR ICL Results in WER are reported for the ASR tasks in Table 1; here the Chinese WER is calculated over Chinese characters, which is also termed the character error rate. The ByCS method outperforms the KATE+ baseline in most cases, showing the robustness and effectiveness of our method. When the number of in-context examples k is small, ByCS surpasses the KATE+ baseline by a large margin, with a 10.25% relative WER reduction on average when k = 1. This performance advantage of ByCS shrinks as the number of in-context examples increases, which may be attributed to the fact that ByCS performs the inverse inference of each in-context example individually, applying an independence assumption that ignores the contextual interactions between different in-context examples. The use of Yref in \u201coracle ByCS\u201d further boosts the performance gain, indicating the upper bound of our method for the same number of k. 5.2 Ablation study on ASR ICL 5.2.1 Inverse decoding option The influence of different decoding options for inverse inference is studied on the RASC863 dialectal word dataset. The results are shown in Table 2. For the setting notation, \u201cnoprompt\u201d denotes decoding with the default decoding options, \u201cprompt\u201d means decoding with a specially designed prompt \u201c\u8bc6\u522b\u65b9\u8a00\u201d (meaning \u201crecognize dialect speech\u201d), and \u201cLID\u201d denotes decoding with the correct language identity of Chinese (\u201czh\u201d).
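For the ablation that follows, the three inverse decoding options can be reproduced with the transcribe() interface of the open-source Whisper package; the sketch below is an illustration under that assumption and may differ from the authors' exact decoding configuration.

```python
import whisper  # open-source Whisper package: https://github.com/openai/whisper

# Sketch of the three inverse decoding options compared in the ablation,
# assuming the package's standard transcribe() arguments.
model = whisper.load_model("small")  # a smaller family member, as used for fast inverse inference

def inverse_decode(audio_path: str, option: str = "noprompt") -> str:
    if option == "noprompt":   # default decoding options
        return model.transcribe(audio_path)["text"]
    if option == "prompt":     # decode with the designed prompt ("recognize dialect speech")
        return model.transcribe(audio_path, initial_prompt="\u8bc6\u522b\u65b9\u8a00")["text"]
    if option == "LID":        # decode with the correct language identity
        return model.transcribe(audio_path, language="zh")["text"]
    raise ValueError(f"unknown decoding option: {option}")
```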
The results show that, among the three inverse decoding options, \u201cnoprompt\u201d obtains the best performance, \u201cprompt\u201d comes second, and \u201cLID\u201d is the worst. The WERs of inverse inference are reported in Table 3. The WERs under the \u201cnoprompt\u201d setting are more than 100% due to the high insertion error rate. Repeated outputs are not removed when calculating the WERs of inverse inference or when calculating the text similarity, which makes the distinction between examples with high mutual information interaction and those with low interaction more obvious. Although it may seem counter-intuitive that low inverse inference accuracy results in high ByCS selection performance, it is reasonable, since inverse inference in ByCS serves to separate good in-context examples from the rest, and this is better achieved by using worse decoding options during inverse inference: such decoding options tend to make the model make more mistakes on the worse in-context examples.

Table 2: %WERs of Whisper large-v2 on the RASC863 dialectal word dataset using the ByCS method with different inverse decoding options and text similarity measurements. The number of in-context examples is k = 1. Columns: RASC863 Chongqing, RASC863 Guangzhou.
Jaccard coefficient - noprompt: 62.4, 49.5; prompt: 62.9, 50.7; LID: 64.1, 52.3
BERT wordvecs - noprompt: 62.4, 51.5; prompt: 63.5, 56.8; LID: 64.5, 57.7

Table 3: Inverse inference %WERs of Whisper large-v2 on the RASC863 dialectal word dataset with different inverse decoding options. Columns: RASC863 Chongqing, RASC863 Guangzhou.
noprompt: 91.5, 125.2; prompt: 70.2, 70.1; LID: 54.6, 61.7

5.2.2 Text similarity measurement The results of ByCS with different text similarity measurements are also reported in Table 2. For the setting notation, the \u201cJaccard coefficient\u201d is a commonly used statistic to gauge similarity, defined as the intersection over the union of two sentences. \u201cBERT wordvecs\u201d measures similarity based on the Euclidean distance in the embedding space of BERT-encoded word vectors. The embedding retrieval module is bert-base-chinese [1]. ByCS with the Jaccard coefficient as the text similarity has lower WERs, which may be because the training data of the BERT model doesn\u2019t include sufficient dialectal Chinese words and expressions. It also indicates that ByCS can work well with even a simple rule-based text similarity measurement, further verifying its high robustness. The Jaccard coefficient is used as the text similarity measurement in later experiments unless explicitly specified, due to its performance and simplicity.

Table 4: %WERs on the RASC863 Chongqing dialectal word dataset for ByCS with different inverse inference models. ByCSlargev2, ByCSlargev3 and ByCSsmall use Whisper-large-v2, Whisper-large-v3 and Whisper-small as the inverse inference model, respectively.
(a) Results with Whisper large-v2 (columns: k = 1/2/3/4). KATE+: 67.1 / 54.7 / 51.3 / 49.7; ByCSlargev2: 62.4 / 53.4 / 50.6 / 48.6; ByCSsmall: 64.2 / 53.3 / 50.5 / 48.7
(b) Results with Whisper large-v3 (columns: k = 1/2/3/4). KATE+: 68.1 / 58.2 / 54.8 / 54.1; ByCSlargev3: 63.5 / 56.3 / 53.5 / 51.8; ByCSsmall: 64.4 / 56.5 / 54.1 / 51.7

5.2.3 Inverse inference model The inverse inference with different models is also investigated, with the results displayed in Table 4.
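As a side note to the similarity measurements compared in Section 5.2.2, the Jaccard coefficient can be computed directly from token sets. The tokenisation in the sketch below (whitespace tokens, falling back to characters for unsegmented text such as Chinese) is an illustrative assumption rather than the authors' exact choice.

```python
# Illustrative Jaccard coefficient: intersection over union of two token sets.

def jaccard(reference: str, hypothesis: str) -> float:
    def tokens(text: str) -> set:
        parts = text.split()
        # Fall back to character tokens for unsegmented text (e.g. Chinese).
        return set(parts) if len(parts) > 1 else set(text)

    ref, hyp = tokens(reference), tokens(hypothesis)
    if not ref and not hyp:
        return 1.0
    return len(ref & hyp) / len(ref | hyp)

# e.g. jaccard("it rains today", "it rained today") == 0.5  (2 shared / 4 total tokens)
```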
A smaller model is used for inverse inference to speed up ByCS, since it is expensive to perform inverse inference with the inference model for every candidate example in the datastore. Replacing Whisper-large-v2/v3 with Whisper-small speeds up inverse inference roughly six times [2]. For the notation, the subscript denotes the inverse inference model; for example, ByCSsmall is the ByCS method with Whisper small as the inverse inference model. ByCSsmall has results similar to ByCSlargev2 and ByCSlargev3, verifying the effectiveness of using a smaller model from the same family for inverse inference. This is intuitive, since Whisper-small is trained on the same data and with the same settings as the inference models Whisper-large-v2 and Whisper-large-v3; it therefore processes information similarly and can serve as a good alternative when evaluating the quality of the in-context examples. The smaller size of Whisper-small makes ByCS a more practical method in cost-sensitive scenarios.
[1] https://huggingface.co/bert-base-chinese
[2] https://github.com/openai/whisper

Table 5: Results of the four text ICL tasks on two GPT-family models with different in-context example selection methods. The evaluation metrics are denoted in brackets. The example datastore is narrowed down to a small size using kNN for ByCS. In the \u2018default\u2019 setting, the answers are generated directly from the questions without ICL.
(a) Results using GPT-3.5-Turbo. Columns: TREC (%Acc. \u2191, k = 1/2/4); SST2 (%Acc. \u2191, k = 1/2); Spider (%Acc. \u2191, k = 1); HyPoradise CHiME-4 (%WER \u2193, k = 1/2/5).
default (no ICL): TREC 63.0; SST2 92.92; Spider 67.41; HyPoradise 8.0
random: 63.5 / 72.7 / 75.3; 94.96 / 94.80; 67.02; 7.5 / 7.5 / 7.3
KATE+: 78.8 / 86.4 / 91.0; 95.05 / 94.69; 69.44; 7.7 / 7.1 / 6.8
bm25: 74.6 / 89.4 / 89.8; 95.27 / 95.40; 67.41; 7.4 / 7.5 / 8.1
LLM-R: 78.0 / 88.8 / 90.4; 95.05 / 94.02; 67.82; 7.4 / 6.9 / 7.0
ByCS: 81.2 / 88.0 / 90.6; 95.16 / 95.04; 69.63; 7.1 / 6.8 / 6.4
(b) Results using GPT-4. Columns as in (a).
default (no ICL): TREC 75.2; SST2 95.01; Spider 69.63; HyPoradise 11.6
random: 81.3 / 82.5 / 84.6; 96.38 / 96.11; 70.66; 6.9 / 6.8 / 6.5
KATE+: 88.2 / 91.6 / 93.4; 96.43 / 95.85; 71.95; 7.0 / 6.3 / 5.8
bm25: 81.8 / 87.4 / 91.4; 96.19 / 96.09; 71.47; 6.8 / 6.6 / 6.3
LLM-R: 88.2 / 91.0 / 93.6; 95.74 / 95.06; 72.63; 6.8 / 6.3 / 5.9
ByCS: 88.6 / 92.4 / 93.6; 96.55 / 96.31; 72.82; 6.7 / 6.3 / 5.9

5.3 Text ICL Text-only ICL results are shown in Table 5. ByCS outperforms all baselines on most dataset settings, showing not only the effectiveness but also the robustness of ByCS. In particular, ByCS outperforms the best baseline on the generative ASR rescoring dataset HyPoradise with a considerable 4.7% relative WER reduction with GPT-3.5-Turbo. On the TREC and SST2 datasets, ByCS does not always outperform the baselines. This indicates that ByCS is more suitable for open-ended long-answer datasets, because it relies on the calculation of text similarity: on such datasets answers are much more diverse, and examples with rich information interactions can be better separated. In contrast, in multi-choice classification datasets only a few short answers are available, containing little contextual information. As the example in Figure 4 shows, the distribution of the text similarity used for ranking the examples is then often sharp, merging the optimal and the suboptimal examples.
Furthermore, considering the hypothesized labels of the test inputs used for inverse inference, the hypothesized answers in open-ended datasets (in the form of long sentences) are often more similar to their corresponding references than those in the multi-choice classification datasets (in the form of a word, a phrase, or just an index of a choice). It is also observed that different in-context example selection methods perform differently with different models, even on the same dataset. The bm25 method outperforms the KATE+ method with GPT-3.5-Turbo on the SST2 dataset, but not with GPT-4. Compared to KATE+ and bm25, which are model-free in the actual selection step, the performance advantage of ByCS is more consistent, since it takes into account the influence of the model: the outputs of the inverse inference model are used, which can serve as a good approximation to the inference model, as verified in Section 5.2.3. Note that for ByCS on GPT-4, although the inverse inference procedure is conducted with GPT-3.5-Turbo, the performance of ByCS is still superior. This further verifies that smaller models from the same model family can serve as a good low-cost approximation for the inverse inference model.

Figure 4: The distribution of text similarity scores on different datasets: (a) distribution on SST2, (b) distribution on HyPoradise. The text similarity score is the Jaccard coefficient. The entropy of each distribution is calculated and placed at the upper left. The distribution on the multi-choice classification dataset SST2 (blue) is much sharper than that of the open-ended dataset HyPoradise (red).

5.4 VQA ICL ByCS is tested on VQA ICL and the results are reported in Table 6. ByCS outperforms the KATE+ baseline on the VQA ICL task, demonstrating strong performance across modalities. The performance improvement from ByCS is not as obvious as on the audio and text tasks, since the answers in VQA are usually short (a word or a phrase), lacking sufficient contextual information. ByCS on the VQA dataset suffers from the problem of having sharp text similarity score distributions, similar to the multi-choice classification datasets.

Table 6: Results of VQA ICL with different in-context example selection methods and numbers of examples on the OKVQA dataset.
(a) Results with Emu-2: k = 2: KATE+ 40.47, ByCS 40.12; k = 4: KATE+ 45.11, ByCS 45.14
(b) Results with GPT-4V: k = 2: KATE+ 52.54, ByCS 52.86; k = 4: KATE+ 54.00, ByCS 54.39

For ByCS with GPT-4V, inverse inference results on Emu-2 are used to pre-select the candidate examples, and ByCS still outperforms the KATE+ baseline. The performance may be further improved if GPT-4V is also used for inverse inference. This demonstrates that ICL may behave similarly across models not only on speech and text, but also on images. 6 Conclusion This paper proposes ByCS, a novel in-context example selection method based on Bayes\u2019 theorem, which assumes that the contextual information interaction between the test input and the in-context examples is mutual, and selects high-quality examples based on the inverse inference results. Experiments are performed across three modalities, speech, text, and images, using six different tasks and seven datasets. The results demonstrate the robustness and effectiveness of ByCS. It is also validated that the inverse inference results can be approximated using a smaller model from the same model family, which considerably reduces the computational cost.
Moreover, relying on text similarity to rank in-context examples, ByCS is more suitable for open-ended long-answer datasets which contain sufficient contextual information. Future work is to extend the inverse inference to sequences with multiple incontext examples to model the interactions among the in-context examples. Limitations There are two limitations to this work. First, ByCS follows the simple assumption that the influence of each in-context example is independent and treats each in-context example individually, which neglects the contextual interactions between incontext examples. The approximation may be not adapted to the scenario in which the number of in-context examples is high. Another limitation is that sufficient contextual diversity is required by ByCS to select optimal examples for it depends on text similarity to evaluate inverse inference results. ByCS may suffer performance penalty when applied to a short-answer dataset. Future work includes enhancing ByCS in more scenarios. Ethics Statement The work doesn\u2019t give rise to any ethical risks and issues. All the models and data used in this paper are publicly accessible and used under licenses." + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.13286v1", + "title": "Generative Multimodal Models are In-Context Learners", + "abstract": "The human ability to easily solve multimodal tasks in context (i.e., with\nonly a few demonstrations or simple instructions), is what current multimodal\nsystems have largely struggled to imitate. In this work, we demonstrate that\nthe task-agnostic in-context learning capabilities of large multimodal models\ncan be significantly enhanced by effective scaling-up. We introduce Emu2, a\ngenerative multimodal model with 37 billion parameters, trained on large-scale\nmultimodal sequences with a unified autoregressive objective. Emu2 exhibits\nstrong multimodal in-context learning abilities, even emerging to solve tasks\nthat require on-the-fly reasoning, such as visual prompting and object-grounded\ngeneration. The model sets a new record on multiple multimodal understanding\ntasks in few-shot settings. 
When instruction-tuned to follow specific\ninstructions, Emu2 further achieves new state-of-the-art on challenging tasks\nsuch as question answering benchmarks for large multimodal models and\nopen-ended subject-driven generation. These achievements demonstrate that Emu2\ncan serve as a base model and general-purpose interface for a wide range of\nmultimodal tasks. Code and models are publicly available to facilitate future\nresearch.", + "authors": "Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, Xinlong Wang", + "published": "2023-12-20", + "updated": "2023-12-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.06804v1", + "title": "What Makes Good In-Context Examples for GPT-$3$?", + "abstract": "GPT-$3$ has attracted lots of attention due to its superior performance\nacross a wide range of NLP tasks, especially with its powerful and versatile\nin-context few-shot learning ability. Despite its success, we found that the\nempirical results of GPT-$3$ depend heavily on the choice of in-context\nexamples. In this work, we investigate whether there are more effective\nstrategies for judiciously selecting in-context examples (relative to random\nsampling) that better leverage GPT-$3$'s few-shot capabilities. Inspired by the\nrecent success of leveraging a retrieval module to augment large-scale neural\nnetwork models, we propose to retrieve examples that are semantically-similar\nto a test sample to formulate its corresponding prompt. Intuitively, the\nin-context examples selected with such a strategy may serve as more informative\ninputs to unleash GPT-$3$'s extensive knowledge. We evaluate the proposed\napproach on several natural language understanding and generation benchmarks,\nwhere the retrieval-based prompt selection approach consistently outperforms\nthe random baseline. Moreover, it is observed that the sentence encoders\nfine-tuned on task-related datasets yield even more helpful retrieval results.\nNotably, significant gains are observed on tasks such as table-to-text\ngeneration (41.9% on the ToTTo dataset) and open-domain question answering\n(45.5% on the NQ dataset). We hope our investigation could help understand the\nbehaviors of GPT-$3$ and large-scale pre-trained LMs in general and enhance\ntheir few-shot capabilities.", + "authors": "Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen", + "published": "2021-01-17", + "updated": "2021-01-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.06301v1", + "title": "Misconfidence-based Demonstration Selection for LLM In-Context Learning", + "abstract": "In-context learning with large language models (LLMs) excels at adapting to\nvarious tasks rapidly. However, its success hinges on carefully selecting\ndemonstrations, which remains an obstacle in practice. Current approaches to\nthis problem either rely on hard-to-acquire external supervision or require\nfrequent interactions with LLMs, resulting in high costs. We propose a new\nmethod called In-Context Reflection (ICR) to overcome these challenges. ICR\nstrategically selects demonstrations to reduce the discrepancy between the\nLLM's outputs and the actual input-output mappings. Specifically, ICR starts\nwith a random set of initial demonstrations, then iteratively refines it. 
In\neach step, it analyzes a pool of candidate examples and identifies the ones\nmost likely to challenge the LLM's current understanding, measured by a new\nmetric called misconfidence. These most confusing examples are then selected to\nreplace the less informative demonstrations in the current set. Our\ncomprehensive evaluation across five diverse datasets encompassing 13 subtasks\nshows the efficacy of ICR. Compared to existing methods, ICR achieves an\naverage performance boost of 4%, while demonstrating remarkable cross-task\ngeneralization capabilities.", + "authors": "Shangqing Xu, Chao Zhang", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.11042v2", + "title": "In-context Example Selection with Influences", + "abstract": "In-context learning (ICL) is a powerful paradigm emerged from large language\nmodels (LLMs). Despite its promises, ICL performance is known to be highly\nsensitive to input examples. In this work, we use $\\textit{in-context\ninfluences}$ to analyze few-shot ICL performance directly from the in-context\nexamples. Our proposed influence-based example selection method can identify\nboth positive and negative examples, outperforming several baselines when\nevaluated on 9 SuperGLUE tasks. Our analysis uncovers up to a $16.3\\%$\nperformance gap between using the most negative in-context examples compared to\nthe most positive. In a case study, we apply our influence-based framework to\nquantify the phenomena of recency bias in example ordering for few-shot ICL.", + "authors": "Tai Nguyen, Eric Wong", + "published": "2023-02-21", + "updated": "2023-06-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14610v3", + "title": "Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning", + "abstract": "Mathematical reasoning, a core ability of human intelligence, presents unique\nchallenges for machines in abstract thinking and logical reasoning. Recent\nlarge pre-trained language models such as GPT-3 have achieved remarkable\nprogress on mathematical reasoning tasks written in text form, such as math\nword problems (MWP). However, it is unknown if the models can handle more\ncomplex problems that involve math reasoning over heterogeneous information,\nsuch as tabular data. To fill the gap, we present Tabular Math Word Problems\n(TabMWP), a new dataset containing 38,431 open-domain grade-level problems that\nrequire mathematical reasoning on both textual and tabular data. Each question\nin TabMWP is aligned with a tabular context, which is presented as an image,\nsemi-structured text, and a structured table. There are two types of questions:\nfree-text and multi-choice, and each problem is annotated with gold solutions\nto reveal the multi-step reasoning process. We evaluate different pre-trained\nmodels on TabMWP, including the GPT-3 model in a few-shot setting. As earlier\nstudies suggest, since few-shot GPT-3 relies on the selection of in-context\nexamples, its performance is unstable and can degrade to near chance. The\nunstable issue is more severe when handling complex problems like TabMWP. 
To\nmitigate this, we further propose a novel approach, PromptPG, which utilizes\npolicy gradient to learn to select in-context examples from a small amount of\ntraining data and then constructs the corresponding prompt for the test\nexample. Experimental results show that our method outperforms the best\nbaseline by 5.31% on the accuracy metric and reduces the prediction variance\nsignificantly compared to random selection, which verifies its effectiveness in\nselecting in-context examples.", + "authors": "Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan", + "published": "2022-09-29", + "updated": "2023-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.11916v4", + "title": "Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning", + "abstract": "In recent years, pre-trained large language models (LLMs) have demonstrated\nremarkable efficiency in achieving an inference-time few-shot learning\ncapability known as in-context learning. However, existing literature has\nhighlighted the sensitivity of this capability to the selection of few-shot\ndemonstrations. Current understandings of the underlying mechanisms by which\nthis capability arises from regular language model pretraining objectives\nremain disconnected from the real-world LLMs. This study aims to examine the\nin-context learning phenomenon through a Bayesian lens, viewing real-world LLMs\nas latent variable models. On this premise, we propose an algorithm to select\noptimal demonstrations from a set of annotated data with a small LM, and then\ndirectly generalize the selected demonstrations to larger LMs. We demonstrate\nsignificant improvement over baselines, averaged over eight GPT models on eight\nreal-world text classification datasets. We also demonstrate the real-world\nusefulness of our algorithm on GSM8K, a math word problem dataset. Our\nempirical findings support our hypothesis that LLMs implicitly infer a latent\nvariable containing task information.", + "authors": "Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, William Yang Wang", + "published": "2023-01-27", + "updated": "2024-02-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.08633v2", + "title": "Learning To Retrieve Prompts for In-Context Learning", + "abstract": "In-context learning is a recent paradigm in natural language understanding,\nwhere a large pre-trained language model (LM) observes a test instance and a\nfew training examples as its input, and directly decodes the output without any\nupdate to its parameters. However, performance has been shown to strongly\ndepend on the selected training examples (termed prompt). In this work, we\npropose an efficient method for retrieving prompts for in-context learning\nusing annotated data and a LM. Given an input-output pair, we estimate the\nprobability of the output given the input and a candidate training example as\nthe prompt, and label training examples as positive or negative based on this\nprobability. We then train an efficient dense retriever from this data, which\nis used to retrieve training examples as prompts at test time. 
We evaluate our\napproach on three sequence-to-sequence tasks where language utterances are\nmapped to meaning representations, and find that it substantially outperforms\nprior work and multiple baselines across the board.", + "authors": "Ohad Rubin, Jonathan Herzig, Jonathan Berant", + "published": "2021-12-16", + "updated": "2022-05-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.05253v2", + "title": "MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning", + "abstract": "Large-scale pretraining is fast becoming the norm in Vision-Language (VL)\nmodeling. However, prevailing VL approaches are limited by the requirement for\nlabeled data and the use of complex multi-step pretraining objectives. We\npresent MAGMA - a simple method for augmenting generative language models with\nadditional modalities using adapter-based finetuning. Building on Frozen, we\ntrain a series of VL models that autoregressively generate text from arbitrary\ncombinations of visual and textual input. The pretraining is entirely\nend-to-end using a single language modeling objective, simplifying optimization\ncompared to previous approaches. Importantly, the language model weights remain\nunchanged during training, allowing for transfer of encyclopedic knowledge and\nin-context learning abilities from language pretraining. MAGMA outperforms\nFrozen on open-ended generative tasks, achieving state of the art results on\nthe OKVQA benchmark and competitive results on a range of other popular VL\nbenchmarks, while pretraining on 0.2% of the number of samples used to train\nSimVLM.", + "authors": "Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank", + "published": "2021-12-09", + "updated": "2022-10-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "I.2.7; I.4.8; I.5.1" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14726v2", + "title": "In-Context Demonstration Selection with Cross Entropy Difference", + "abstract": "Large language models (LLMs) can use in-context demonstrations to improve\nperformance on zero-shot tasks. However, selecting the best in-context examples\nis challenging because model performance can vary widely depending on the\nselected examples. We present a cross-entropy difference (CED) method for\nselecting in-context demonstrations. Our method is based on the observation\nthat the effectiveness of in-context demonstrations negatively correlates with\nthe perplexity of the test example by a language model that was finetuned on\nthat demonstration. We utilize parameter efficient finetuning to train small\nmodels on training data that are used for computing the cross-entropy\ndifference between a test example and every candidate in-context demonstration.\nThis metric is used to rank and select in-context demonstrations independently\nfor each test input. 
We evaluate our method on a mix-domain dataset that\ncombines 8 benchmarks, representing 4 text generation tasks, showing that CED\nfor in-context demonstration selection can improve performance for a variety of\nLLMs.", + "authors": "Dan Iter, Reid Pryzant, Ruochen Xu, Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu", + "published": "2023-05-24", + "updated": "2023-11-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.02248v1", + "title": "COSMIC: Data Efficient Instruction-tuning For Speech In-Context Learning", + "abstract": "We present a data and cost efficient way of incorporating the speech modality\ninto a large language model (LLM). The resulting multi-modal LLM is a\nCOntextual Speech Model with Instruction-following/in-context-learning\nCapabilities - COSMIC. Speech comprehension test question-answer (SQA) pairs\nare generated using GPT-3.5 based on the speech transcriptions as a part of the\nsupervision for the instruction tuning. With fewer than 20M trainable\nparameters and as little as 450 hours of English speech data for SQA\ngeneration, COSMIC exhibits emergent instruction-following and in-context\nlearning capabilities in speech-to-text tasks. The model is able to follow the\ngiven text instructions to generate text response even on the unseen EN$\\to$X\nspeech-to-text translation (S2TT) task with zero-shot setting. We evaluate the\nmodel's in-context learning via various tasks such as EN$\\to$X S2TT and\nfew-shot domain adaptation. And instruction-following capabilities are\nevaluated through a contextual biasing benchmark. Our results demonstrate the\nefficacy of the proposed low cost recipe for building a speech LLM and that\nwith the new instruction-tuning data.", + "authors": "Jing Pan, Jian Wu, Yashesh Gaur, Sunit Sivasankaran, Zhuo Chen, Shujie Liu, Jinyu Li", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.04486v1", + "title": "Active Example Selection for In-Context Learning", + "abstract": "With a handful of demonstration examples, large-scale language models show\nstrong capability to perform various tasks by in-context learning from these\nexamples, without any fine-tuning. We demonstrate that in-context learning\nperformance can be highly unstable across samples of examples, indicating the\nidiosyncrasies of how language models acquire information. We formulate example\nselection for in-context learning as a sequential decision problem, and propose\na reinforcement learning algorithm for identifying generalizable policies to\nselect demonstration examples. For GPT-2, our learned policies demonstrate\nstrong abilities of generalizing to unseen tasks in training, with a $5.8\\%$\nimprovement on average. Examples selected from our learned policies can even\nachieve a small improvement on GPT-3 Ada. 
However, the improvement diminishes\non larger GPT-3 models, suggesting emerging capabilities of large language\nmodels.", + "authors": "Yiming Zhang, Shi Feng, Chenhao Tan", + "published": "2022-11-08", + "updated": "2022-11-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.13884v2", + "title": "Multimodal Few-Shot Learning with Frozen Language Models", + "abstract": "When trained at sufficient scale, auto-regressive language models exhibit the\nnotable ability to learn a new language task after being prompted with just a\nfew examples. Here, we present a simple, yet effective, approach for\ntransferring this few-shot learning ability to a multimodal setting (vision and\nlanguage). Using aligned image and caption data, we train a vision encoder to\nrepresent each image as a sequence of continuous embeddings, such that a\npre-trained, frozen language model prompted with this prefix generates the\nappropriate caption. The resulting system is a multimodal few-shot learner,\nwith the surprising ability to learn a variety of new tasks when conditioned on\nexamples, represented as a sequence of multiple interleaved image and text\nembeddings. We demonstrate that it can rapidly learn words for new objects and\nnovel visual categories, do visual question-answering with only a handful of\nexamples, and make use of outside knowledge, by measuring a single model on a\nvariety of established and new benchmarks.", + "authors": "Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill", + "published": "2021-06-25", + "updated": "2021-07-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.04356v1", + "title": "Robust Speech Recognition via Large-Scale Weak Supervision", + "abstract": "We study the capabilities of speech processing systems trained simply to\npredict large amounts of transcripts of audio on the internet. When scaled to\n680,000 hours of multilingual and multitask supervision, the resulting models\ngeneralize well to standard benchmarks and are often competitive with prior\nfully supervised results but in a zero-shot transfer setting without the need\nfor any fine-tuning. When compared to humans, the models approach their\naccuracy and robustness. We are releasing models and inference code to serve as\na foundation for further work on robust speech processing.", + "authors": "Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever", + "published": "2022-12-06", + "updated": "2022-12-06", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.CL", + "cs.LG", + "cs.SD" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.02111v1", + "title": "Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers", + "abstract": "We introduce a language modeling approach for text to speech synthesis (TTS).\nSpecifically, we train a neural codec language model (called Vall-E) using\ndiscrete codes derived from an off-the-shelf neural audio codec model, and\nregard TTS as a conditional language modeling task rather than continuous\nsignal regression as in previous work. During the pre-training stage, we scale\nup the TTS training data to 60K hours of English speech which is hundreds of\ntimes larger than existing systems. 
Vall-E emerges in-context learning\ncapabilities and can be used to synthesize high-quality personalized speech\nwith only a 3-second enrolled recording of an unseen speaker as an acoustic\nprompt. Experiment results show that Vall-E significantly outperforms the\nstate-of-the-art zero-shot TTS system in terms of speech naturalness and\nspeaker similarity. In addition, we find Vall-E could preserve the speaker's\nemotion and acoustic environment of the acoustic prompt in synthesis. See\nhttps://aka.ms/valle for demos of our work.", + "authors": "Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, Furu Wei", + "published": "2023-01-05", + "updated": "2023-01-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.14198v2", + "title": "Flamingo: a Visual Language Model for Few-Shot Learning", + "abstract": "Building models that can be rapidly adapted to novel tasks using only a\nhandful of annotated examples is an open challenge for multimodal machine\nlearning research. We introduce Flamingo, a family of Visual Language Models\n(VLM) with this ability. We propose key architectural innovations to: (i)\nbridge powerful pretrained vision-only and language-only models, (ii) handle\nsequences of arbitrarily interleaved visual and textual data, and (iii)\nseamlessly ingest images or videos as inputs. Thanks to their flexibility,\nFlamingo models can be trained on large-scale multimodal web corpora containing\narbitrarily interleaved text and images, which is key to endow them with\nin-context few-shot learning capabilities. We perform a thorough evaluation of\nour models, exploring and measuring their ability to rapidly adapt to a variety\nof image and video tasks. These include open-ended tasks such as visual\nquestion-answering, where the model is prompted with a question which it has to\nanswer; captioning tasks, which evaluate the ability to describe a scene or an\nevent; and close-ended tasks such as multiple-choice visual question-answering.\nFor tasks lying anywhere on this spectrum, a single Flamingo model can achieve\na new state of the art with few-shot learning, simply by prompting the model\nwith task-specific examples. On numerous benchmarks, Flamingo outperforms\nmodels fine-tuned on thousands of times more task-specific data.", + "authors": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan", + "published": "2022-04-29", + "updated": "2022-11-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.10559v3", + "title": "Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers", + "abstract": "Large pretrained language models have shown surprising in-context learning\n(ICL) ability. With a few demonstration input-label pairs, they can predict the\nlabel for an unseen input without parameter updates. Despite the great success\nin performance, its working mechanism still remains an open question. 
In this\npaper, we explain language models as meta-optimizers and understand in-context\nlearning as implicit finetuning. Theoretically, we figure out that Transformer\nattention has a dual form of gradient descent. On top of it, we understand ICL\nas follows: GPT first produces meta-gradients according to the demonstration\nexamples, and then these meta-gradients are applied to the original GPT to\nbuild an ICL model. We comprehensively compare the behaviors of in-context\nlearning and explicit finetuning on real tasks to provide empirical evidence\nthat supports our understanding. Experimental results show that in-context\nlearning behaves similarly to explicit finetuning from multiple perspectives.\nInspired by the dual form between Transformer attention and gradient descent,\nwe design a momentum-based attention by analogy with gradient descent with\nmomentum. The improved performance over vanilla attention further supports our\nunderstanding from another perspective, and more importantly, shows the\npotential to utilize our understanding for future model design. The code is\navailable at \\url{https://aka.ms/icl}.", + "authors": "Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, Furu Wei", + "published": "2022-12-20", + "updated": "2023-05-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.09881v2", + "title": "In-Context Learning with Iterative Demonstration Selection", + "abstract": "Spurred by advancements in scale, large language models (LLMs) have\ndemonstrated strong few-shot learning ability via in-context learning (ICL).\nHowever, the performance of ICL has been shown to be highly sensitive to the\nselection of few-shot demonstrations. Selecting the most suitable examples as\ncontext remains an ongoing challenge and an open problem. Existing literature\nhas highlighted the importance of selecting examples that are diverse or\nsemantically similar to the test sample while ignoring the fact that the\noptimal selection dimension, i.e., diversity or similarity, is task-specific.\nLeveraging the merits of both dimensions, we propose Iterative Demonstration\nSelection (IDS). Using zero-shot chain-of-thought reasoning (Zero-shot-CoT),\nIDS iteratively selects examples that are diverse but still strongly correlated\nwith the test sample as ICL demonstrations. Specifically, IDS applies\nZero-shot-CoT to the test sample before demonstration selection. The output\nreasoning path is then used to choose demonstrations that are prepended to the\ntest sample for inference. The generated answer is accompanied by its\ncorresponding reasoning path for extracting a new set of demonstrations in the\nnext iteration. After several iterations, IDS adopts majority voting to obtain\nthe final result. 
Through extensive experiments on tasks including commonsense\nreasoning, question answering, topic classification, and sentiment analysis, we\ndemonstrate that IDS can consistently outperform existing ICL demonstration\nselection methods.", + "authors": "Chengwei Qin, Aston Zhang, Anirudh Dagar, Wenming Ye", + "published": "2023-10-15", + "updated": "2023-10-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. 
This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. 
This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? 
Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make a thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks.
Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. 
The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. 
Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. 
Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. 
Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from the perspectives of both computer\nscience and the Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments.
This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. 
By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models (LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, i.e., benchmark leakage, referring to the case where the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers.
We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. 
Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. 
In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. 
Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. 
We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. 
Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. 
However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. 
RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. 
We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? 
Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmarks that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain.
The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance.
This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). 
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. 
As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase.
Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators.
This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. 
To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures.
As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. 
Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. 
Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. 
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. 
We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.'
While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. 
Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. 
Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. 
A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. 
We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.13099v1", + "title": "Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks", + "abstract": "The rapid progress in the field of natural language processing (NLP) systems\nand the expansion of large language models (LLMs) have opened up numerous\nopportunities in the field of education and instructional methods. These\nadvancements offer the potential for tailored learning experiences and\nimmediate feedback, all delivered through accessible and cost-effective\nservices. One notable application area for this technological advancement is in\nthe realm of solving mathematical problems. Mathematical problem-solving not\nonly requires the ability to decipher complex problem statements but also the\nskill to perform precise arithmetic calculations at each step of the\nproblem-solving process. However, the evaluation of the arithmetic capabilities\nof large language models remains an area that has received relatively little\nattention. In response, we introduce an extensive mathematics dataset called\n\"MathQuest\" sourced from the 11th and 12th standard Mathematics NCERT\ntextbooks. 
This dataset encompasses mathematical challenges of varying\ncomplexity and covers a wide range of mathematical concepts. Utilizing this\ndataset, we conduct fine-tuning experiments with three prominent LLMs: LLaMA-2,\nWizardMath, and MAmmoTH. These fine-tuned models serve as benchmarks for\nevaluating their performance on our dataset. Our experiments reveal that among\nthe three models, MAmmoTH-13B emerges as the most proficient, achieving the\nhighest level of competence in solving the presented mathematical problems.\nConsequently, MAmmoTH-13B establishes itself as a robust and dependable\nbenchmark for addressing NCERT mathematics problems.",
    "authors": "Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah",
    "published": "2024-04-19",
    "updated": "2024-04-19",
    "primary_cat": "cs.CL",
    "cats": [
        "cs.CL",
        "cs.AI"
    ],
    "label": "Original Paper",
    "paper_cat": "LLM Fairness",
    "gt": "In this section, we delve into the existing literature, unveiling a diverse array of approaches utilizing Large Language Models (LLMs) for tackling mathematical problems. Recent research has highlighted the potential of Large Language Models (LLMs) in education [2, 1]. They offer promise in automating question generation and supporting direct interactions within the learning environment [18]. Furthermore, investigations have explored few-shot prompting techniques over LLMs for addressing mathematical word problems [35, 42, 11]. The \"chain-of-thought\" prompting approach [35] leverages explicit intermediate reasoning steps to bolster the LLM\u2019s reasoning abilities. To mitigate arithmetic errors commonly observed in LLMs [21, 14], earlier studies [7] have explored the use of external calculators to execute operations generated by LLMs. (Figure 2: Our Dataset MathQuest Sample. Problem: If the lines $2x + y - 3 = 0$, $5x + ky - 3 = 0$ and $3x - y - 2 = 0$ are concurrent, find the value of k. Solution: For lines to be concurrent, they must intersect at a common point. We begin by determining the intersection point of lines (1) and (3). Using the lines $2x + y - 3 = 0$ (referred to as (1)) and $3x - y - 2 = 0$ (referred to as (3)), and solving them simultaneously, we obtain the coordinates (1, 1) for their intersection. This means that for the lines to be concurrent, the point (1, 1) must also satisfy the second line, $5x + ky - 3 = 0$ (referred to as (2)). Substituting x = 1 and y = 1 into this equation, we obtain $5(1) + k(1) - 3 = 0$, which yields the result k = -2.) Furthermore, [36] presents a novel method tailored for addressing elementary arithmetic and logical problems. This method concatenates the generated answer with the original problem statement, tasking the model with predicting the initial conditions to verify the accuracy of the answer.
Notably, a subset of these approaches [10, 5] can function effectively with zero-shot prompts, offering a versatile approach to mathematical problem-solving. A specialized method, MathPrompter [15], targets the enhancement of arithmetic operations and reasoning capabilities of LLMs, particularly designed to facilitate mathematical problem-solving tasks. Various approaches exist for enhancing mathematical problem-solving with Large Language Models (LLMs). Wang et al.\u2019s self-consistency [34], built on the CoT framework, assesses multiple potential reasoning paths and selects answers via majority vote. [22] extend self-consistency by teaching a verifier to validate each step, while [24] use recent LLMs like GPT-3.5 to generate an output, provide feedback, and prompt the model for improvements. [33] evaluate pretrained language models on basic arithmetic expressions, including addition (+) and subtraction (\u2212), and [28] expand the assessment to include multiplication (\u2217) operations within the language models\u2019 scope.",
    "pre_questions": [],
    "main_content": "Introduction Mathematical problem-solving represents a multifaceted cognitive skill, encompassing the comprehension of problem statements, identification of pertinent concepts and formulas, application of suitable strategies and algorithms, precise calculations, and the verification of solution validity and reasonableness. Traditionally, mathematical problem-solving has been imparted and assessed through conventional means such as textbooks, worksheets, and examinations, often affording limited feedback and learner guidance. Furthermore, these methods may not fully capture the diversity and intricacy of real-world mathematical challenges encountered by students. In the era of rapid advancements in artificial intelligence and natural language processing (NLP), large language models (LLMs) have emerged as formidable tools for generating natural language text across a spectrum of domains and tasks [12]. LLMs, grounded in the transformer architecture [32], have the capacity to glean long-range dependencies and contextual representations from vast corpora of text data. These LLMs have showcased impressive proficiency in mathematical reasoning and problem-solving by leveraging their inherent understanding of arithmetic operations, algebraic principles, and symbolic manipulation. Nevertheless, existing LLMs grapple with substantial hurdles in tackling math word problems, particularly those necessitating intricate reasoning, multi-step arithmetic calculations, or domain-specific knowledge [13, 20, 37]. (Figure 1: This figure shows the fine-tuning flow, the LLMs we use for fine-tuning, and the datasets we use for inference.) The advent of large language models (LLMs) has proven to be a boon in the field of education, as evidenced by recent studies [25, 29, 39]. These versatile models have ushered in a new era of learning possibilities, catering to individual student needs by considering their preferences, objectives, interests, and aptitudes. For instance, LLMs offer a tailored learning experience, providing personalized feedback, guidance, explanations, and recommendations [16].
Educators, too, find these models invaluable, as they simplify the creation of engaging learning materials such as quizzes, summaries, questions, and exercises [27]. Notably, LLMs can even generate multiple-choice questions based on provided text passages. Additionally, these models excel in enhancing language proficiency, aiding learners in vocabulary, grammar, pronunciation, and fluency [16]. Their versatility extends to assisting students and researchers in exploring new topics and extracting information from diverse sources. They effortlessly generate summaries [38], identify keywords, generate citations [17, 3, 4], and provide relevant links in response to queries. This paper endeavors to tackle the challenges posed by mathematical problem-solving within the context of LLMs. To this end, we introduce MathQuest, a comprehensive mathematics dataset meticulously curated from the 11th and 12th standard Mathematics NCERT textbooks (https://ncert.nic.in/). This dataset spans various levels of mathematical complexity and encompasses a wide array of mathematical concepts. We introduce this dataset because existing open-source datasets primarily consist of relatively straightforward mathematical problems. In contrast, standard mathematical problems can be significantly more complex. To equip Large Language Models (LLMs) with the ability to solve these intricate problems, we conduct fine-tuning on this dataset. Furthermore, we propose a novel approach for fine-tuning three preeminent LLMs: MAmmoTH [41], LLaMA-2 [31], and WizardMath [23] using our MathQuest dataset. Our evaluation encompasses not only the performance of these fine-tuned models on our dataset but also their proficiency on other openly accessible mathematical reasoning datasets. Our findings indicate that MAmmoTH-13B outshines its counterparts, emerging as the most adept and proficient in solving the mathematical challenges presented. Thus, MAmmoTH-13B establishes itself as a dependable and robust baseline for addressing NCERT mathematics problems. For our research experiments, we employed the Math-401 dataset [40], which encompasses 401 samples of mathematical problems. This dataset encompasses a diverse range of mathematical operations, including addition (+), subtraction (\u2212), multiplication (\u2217), division (/), exponentiation, trigonometric functions (sin, cos, tan), logarithmic functions (log, ln), and incorporates integers, decimals, and irrational numbers (\u03c0, e). Recognizing the limited sample size of this dataset for effective learning by large language models, we expanded it through augmentation, resulting in a dataset size of 302,000 samples. To construct our augmented dataset, we employed the SymPy Python library. This library allowed us to generate arithmetic mathematical equations along with their corresponding ground truth values. These equations covered basic arithmetic operators such as addition (+), subtraction (-), multiplication (*), and division (/). Furthermore, the dataset includes extensive arithmetic expressions with brackets, mimicking the complexity often encountered in real-world math word problems. Table 1 provides a comprehensive breakdown of the question types utilized in the creation of our augmented dataset. Furthermore, we evaluated our model\u2019s performance on four additional datasets: GSM-8K [8], DeepMind [30], NumGLUE [26], and SimulEq [19].
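The SymPy-based augmentation described above can be illustrated with a minimal sketch. This is an assumed reconstruction, not the authors' released code: the helper names (`sample_operand`, `generate_equation`), the non-zero guard on operands, and the exact sampling logic are illustrative; only the operator set, the value ranges, and the use of SymPy to compute ground-truth values come from the text.

```python
# A minimal sketch (assumed, not the authors' released code) of SymPy-based
# augmentation: random arithmetic expressions over +, -, *, / with brackets,
# paired with ground-truth values evaluated symbolically.
import random
from sympy import sympify

OPS = ["+", "-", "*", "/"]

def sample_operand(low, high, use_decimals):
    """Draw a non-zero integer or decimal (1-4 places) from the given range."""
    while True:
        if use_decimals:
            value = round(random.uniform(low, high), random.randint(1, 4))
        else:
            value = random.randint(low, high)
        if value != 0:          # avoid zero so '/' never divides by zero
            return str(value)

def generate_equation(n_terms=2, low=-20, high=20, use_decimals=False):
    """Return (expression, ground_truth) for one synthetic question-answer pair."""
    expr = sample_operand(low, high, use_decimals)
    for _ in range(n_terms - 1):
        op = random.choice(OPS)
        expr = f"({expr}) {op} ({sample_operand(low, high, use_decimals)})"
    value = sympify(expr)       # exact symbolic evaluation of the bracketed expression
    return expr, float(value.evalf())

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        question, answer = generate_equation(n_terms=3, low=-100, high=100, use_decimals=True)
        print(f"Q: {question} = ?   A: {answer}")
```

Bracketing every intermediate result mirrors the bracketed multi-term expressions mentioned above while keeping negative operands unambiguous.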
Table 1: The distribution of types of question in our augmented Math-401 dataset (columns: Type; Range; Decimal Places (1-4), where \u2713 = decimals used and \u00d7 = integers only; Variables; Count) -- Small Integer, [-20, 20], \u00d7, (x, y), 65,000; Small Decimal, [-20, 20], \u2713, (x, y), 35,000; Small Decimal + Integer, [-20, 20], \u2713, (x, y), 39,000; Large Integer, [-1000, 1000], \u00d7, (x, y), 39,000; Large Decimal, [-1000, 1000], \u2713, (x, y), 25,000; Large Decimal + Integer, [-1000, 1000], \u2713, (x, y), 25,000; 3 Terms, [-100, 100], \u2713, (x, y, z), 25,000; 4 Terms, [-100, 100], \u2713, (w, x, y, z), 49,000; Total: 302,000. 3.1 Our Dataset: MathQuest We have meticulously curated our own dataset, referred to as MathQuest, sourcing problems from high school mathematics NCERT books. MathQuest is a rich resource, encompassing word problems of varying complexities and spanning diverse mathematical concepts. Our dataset comprises a total of 14 overarching mathematical domains, including sets, trigonometry, binomial theorem, and more. The distribution of samples across these concepts is visually represented in Figure 3 (Figure 3: Distribution of Count of Samples of each Concept). Our dataset contains a total of 223 samples. Notably, as depicted in the charts, the category of \"Sequence and Series\" boasts the highest number of problems within our dataset. To provide a glimpse of our dataset\u2019s structure, we present a sample from MathQuest in Figure 2. 4 Methodology This research aims to enhance the mathematical problem-solving capabilities of large language models. Initially, we observed that existing open-source models such as LLaMA-2 [31] and Vicuna [6] struggled with elementary mathematical tasks like simple addition and subtraction. This observation served as the catalyst for our research, motivating us to improve LLMs\u2019 proficiency in comprehending and accurately solving mathematical problems. To achieve this, we adopted an instructive approach reminiscent of teaching mathematics to students. We commenced by imparting a clear understanding of fundamental operators such as +, \u2212, \u2217, /, gradually progressing to more advanced operators and expressions. Similarly, we endeavored to acquaint LLMs with the meanings of mathematical operators and expressions. To facilitate this process, we leveraged the Math-401 dataset [40], a valuable resource comprising 401 data samples consisting of basic mathematical questions and their corresponding answers. Given the dataset\u2019s limited size, we augmented it to introduce greater diversity and complexity, ensuring that the model could grasp and master advanced mathematical concepts during training. For the fine-tuning process, we employed three prominent large language models: LLaMA-2 [31], WizardMath [23], and MAmmoTH [41]. LLaMA-2 [31] represents an upgraded version of LLaMA, refined through training on an enriched mixture of publicly available data. The enhancements encompass a 40% increase in the pre-training corpus size, a doubling of the model\u2019s context length, and the incorporation of grouped-query attention. WizardMath [23] introduces an innovative approach known as Reinforcement Learning from Evol-Instruct Feedback (RLEIF). This method combines Evol-Instruct and reinforced process supervision techniques to evolve GSM8k and MATH datasets.
Subsequently, it fine-tunes the pre-trained LLaMA-2 model using the evolved data and reward models, resulting in the development of the WizardMath model. Lastly, the MAmmoTH [41] models are trained using the MathInstruct dataset, meticulously curated for instructional tuning. MathInstruct is constructed from a compilation of 13 mathematical datasets, including six newly curated rationales. It encompasses a hybrid of chain-of-thought (CoT) and program-of-thought (PoT) rationales, ensuring comprehensive coverage of diverse mathematical domains. The entire fine-tuning process is outlined in Figure 1. Table 2: Exact Match Accuracy results on the set of 100 samples of 5 datasets and our dataset MathQuest before fine-tuning on the Math-401 dataset; (*) refers to the set of Math-401 we augmented for fine-tuning (accuracy per model on GSM-8K, DeepMind, NumGLUE, SimulEq, Math-401*, MathQuest) -- LLaMA-2 7B: 16.0, 46.0, 37.0, 11.0, 10.0, 10.4; LLaMA-2 13B: 22.0, 50.0, 42.0, 15.0, 10.0, 14.1; WizardMath 7B: 61.0, 51.0, 54.0, 27.0, 6.0, 14.6; WizardMath 13B: 65.0, 55.0, 70.0, 36.0, 8.0, 14.3; MAmmoTH 7B: 43.0, 49.0, 54.0, 23.0, 11.0, 12.2; MAmmoTH 13B: 44.0, 48.0, 56.0, 26.0, 14.0, 18.1. 5 Experiments In this section, we delve into the details of our conducted experiments, outlining the experimental setup and the utilized hyper-parameters. Our research objective revolves around the creation of a high school-level mathematical dataset, encompassing questions of varying complexities and diverse concepts, followed by the establishment of robust baselines for solving mathematical problems. To achieve this, we conducted experiments involving three prominent large language models: LLaMA-2 [31], WizardMath [23], and MAmmoTH [41]. We performed these experiments on both the 7B and 13B variants of these large language models (LLMs). Our experiments were executed in two stages. In the first stage, we directly loaded the original model weights and carried out inference on our designated test set. In the second stage, we undertook the fine-tuning of these models using the Math-401 [40] dataset as a crucial step in the process. The Math-401 [40] dataset initially comprised 401 elementary mathematical equations paired with their corresponding results. To enhance its comprehensiveness and diversity, we performed data augmentation by introducing more intricate equations involving operators such as addition (+), subtraction (-), multiplication (*), division (/), as well as parentheses (()). This augmentation process aimed to create a more generalized and versatile dataset. Subsequently, we proceeded to fine-tune the Large Language Models (LLMs) using this augmented Math-401 [40] dataset. Table 3: Exact Match Accuracy results on the set of 100 samples of 5 datasets and our dataset MathQuest after fine-tuning on the Math-401 dataset; (*) refers to the set of Math-401 we augmented for fine-tuning (accuracy per model on GSM-8K, DeepMind, NumGLUE, SimulEq, Math-401*, MathQuest) -- LLaMA-2 7B: 30.0, 46.0, 45.0, 15.0, 17.0, 10.6; LLaMA-2 13B: 42.0, 51.0, 54.0, 16.0, 24.0, 20.3; WizardMath 7B: 64.0, 55.0, 52.0, 29.0, 15.0, 16.01; WizardMath 13B: 68.0, 56.0, 70.0, 38.0, 10.0, 20.1; MAmmoTH 7B: 56.0, 50.0, 62.0, 24.0, 16.0, 18.5; MAmmoTH 13B: 67.0, 51.0, 64.0, 34.0, 18.0, 24.0. The dataset was split into training (241,600 samples), validation (30,200 samples), and test (30,200 samples) subsets. We used the AdamW optimizer, a well-recognized technique, to enhance model performance. This optimization step was crucial for achieving the results in our study.
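The exact-match scoring behind the accuracies in Tables 2 and 3 can be sketched as follows. This is an assumed illustration rather than the paper's evaluation script: the answer-extraction heuristic (taking the last number in a generation) and the function names are hypothetical; the text only states that generated answers are compared against ground-truth solutions.

```python
# A small illustrative sketch (assumed, not the paper's evaluation code) of
# exact-match accuracy: the final numeric value in each model generation is
# compared against the reference answer.
import re

NUM_RE = re.compile(r"-?\d+(?:\.\d+)?")

def extract_final_number(text: str):
    """Take the last number appearing in a generated solution, if any."""
    matches = NUM_RE.findall(text.replace(",", ""))
    return float(matches[-1]) if matches else None

def exact_match_accuracy(generations, references):
    """Fraction of samples whose extracted answer equals the ground truth."""
    hits = 0
    for gen, ref in zip(generations, references):
        pred = extract_final_number(gen)
        hits += int(pred is not None and pred == float(ref))
    return hits / len(references)

# Example: one of two predictions matches -> 0.5
print(exact_match_accuracy(["... the result is k = -2", "answer: 3.5"], [-2, 4]))
```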
For fine-tuning, we employed QLora [9], an efficient approach that maximizes memory efficiency and minimize computation cost using 4-bit quantization in a pretrained language model, resulting in Low Rank Adapters (LoRA). Each model underwent 10 epochs of fine-tuning with a learning rate of 3 \u00d7 10\u22124. Post fine-tuning, we assessed the models using the same test set employed for pre-fine-tuning inference. The results, summarized in Table. 3, serve to highlight the enhancements achieved in mathematical problem-solving capabilities before and after fine-tuning. 5.1 Evaluation Metric We compared all model variants to evaluate the quality of the generated solutions. To measure performance, we assessed the accuracy in matching the generated answers to the actual solutions for five open-source datasets: GSM-8K, DeepMind, SimulEq, NumGLUE, and Math-401. These datasets provide ground truth answers for exact match accuracy calculation. 6 Results & Discussion In this section, we present the outcomes of our experiments in the domain of mathematical problemsolving. Our study encompasses evaluations conducted on our proprietary dataset, MathQuest, as well as five other publicly available datasets. This paper establishes baseline performance metrics for 6 the task using our MathQuest dataset. To gauge the effectiveness of Large Language Models (LLMs) across diverse datasets, we utilize exact match accuracy as a benchmark metric. We organize our results into two distinct setups: before fine-tuning and after fine-tuning the models, with the primary aim of evaluating the model\u2019s learning capabilities. Table. 2 presents the exact match accuracy of three models across two variants, 7B and 13B, before fine-tuning, on five datasets and our dataset MathQuest. To summarize these findings, referring to Table. 2, the performance of all the models is notably lower on the SimulEq dataset, as well as on our augmented dataset, Math-401. This discrepancy can be attributed to the presence of intricate problems within these datasets, which often require additional knowledge, such as questions like \"Number of red color cards in a deck of 52 cards.\" Consequently, Table.3 provides a detailed overview of the accuracy results following the fine-tuning process. In summary, the accuracy of all models showed significant improvement after undergoing fine-tuning on our diverse and complex question-answer dataset. Notably, models with 13B parameters exhibited higher accuracy compared to those with 7B parameters. The key takeaways from Table. 2, and Table. 3 reveal that the best-performing model is MAmmoTH13B for our dataset MathQuest, exhibiting the highest accuracy among all models after fine-tuning, at 24.0%. Additionally, it\u2019s noteworthy that both MAmmoTH 7B and 13B generated outputs with precision up to two decimal places, indicating their accuracy. From Table 3, It is evident that our dataset, MathQuest, poses a greater challenge due to its complexity and diversity, resulting in lower accuracy compared to other datasets. 7 Conclusion In summary, our approach enhances Large Language Models (LLMs) in acquiring vital reasoning skills for precise mathematical problem-solving. We introduce tailored question-answer pairs in our MathQuest dataset, encompassing single or multiple mathematical operators and expressions. These supportive simple and complex problems guide the model toward incremental problem-solving. Our primary aim is to provide illustrative examples that improve solution accuracy and clarity. 
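A configuration sketch of the kind of 4-bit QLoRA setup described above (a quantized base model with LoRA adapters, trained for 10 epochs at a 3e-4 learning rate) is given below. This is an assumed illustration using the Hugging Face transformers/peft/bitsandbytes stack, not the authors' training script: the base-model name, LoRA rank, target modules, batch size, and optimizer variant are placeholders.

```python
# A QLoRA configuration sketch (assumptions, not the authors' training script).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"     # placeholder; WizardMath/MAmmoTH load the same way

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(                   # low-rank adapters on attention projections (assumed choice)
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="qlora-math",
    num_train_epochs=10,                    # as reported in the paper
    learning_rate=3e-4,                     # as reported in the paper
    per_device_train_batch_size=4,          # assumed
    optim="paged_adamw_8bit",               # an AdamW variant commonly paired with QLoRA
)
# A Trainer (or SFT-style loop) would then be run over the tokenized
# question-answer pairs, and the resulting adapters kept for inference.
```

Only the epoch count and learning rate come from the paper; every other hyperparameter above is a stand-in that would need tuning.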
Our results demonstrate significant enhancements in both solution precision and comprehensibility, promising valuable support for educators and students seeking effective mathematical problemsolving capabilities. While our research establishes a robust foundation for advancing mathematical problem-solving with Generative LLMs, further refinements and optimizations are essential to extend its applicability across a broader range of scenarios. Ultimately, our work contributes to advancing conceptual understanding and numerical problem-solving in high school-level mathematical question-answering, offering valuable assistance to students and professionals grappling with complex questions through LLMs. 8 Limitations While our proposed solution can successfully solve basic mathematical problems, it occasionally encounters challenges when dealing with complex mathematical problems that involve retaining variable values for use in subsequent equations. Another limitation of our proposed work is the partial enhancement of reasoning abilities in LLMs for solving mathematical problems. However, it still falls short in dealing with complex expressions that include nested brackets within equations. The reason could be limited training dataset size, we will try to increase our training data in future research. We intend to address this limitation in our future work, wherein we plan to incorporate recent prompting techniques and further enhance LLMs reasoning abilities for these types of problems. 9 Acknowledgement Dr. Rajiv Ratn Shah is partly supported by the Infosys Center for AI, the Center of Design and New Media, and the Center of Excellence in Healthcare at Indraprastha Institute of Information Technology, Delhi. We gratefully thank Dr. Astha Verma and Mr. Naman Lal for their guidance and continuous support during our research. Their knowledge and insightful feedback significantly influenced 7 the direction and quality of our research. We appreciate their time, devotion, and willingness to share information, which all contributed considerably to the accomplishment of this job. Their encouragement and constructive talks were a continual source of motivation for us, and we consider ourselves fortunate to have benefited from their wisdom and leadership." + }, + { + "url": "http://arxiv.org/abs/2303.05398v1", + "title": "MathPrompter: Mathematical Reasoning using Large Language Models", + "abstract": "Large Language Models (LLMs) have limited performance when solving arithmetic\nreasoning tasks and often provide incorrect answers. Unlike natural language\nunderstanding, math problems typically have a single correct answer, making the\ntask of generating accurate solutions more challenging for LLMs. To the best of\nour knowledge, we are not aware of any LLMs that indicate their level of\nconfidence in their responses which fuels a trust deficit in these models\nimpeding their adoption. To address this deficiency, we propose `MathPrompter',\na technique that improves performance of LLMs on arithmetic problems along with\nincreased reliance in the predictions. MathPrompter uses the Zero-shot\nchain-of-thought prompting technique to generate multiple Algebraic expressions\nor Python functions to solve the same math problem in different ways and\nthereby raise the confidence level in the output results. This is in contrast\nto other prompt based CoT methods, where there is no check on the validity of\nthe intermediate steps followed. 
Our technique improves over state-of-the-art\non the MultiArith dataset ($78.7\\%\\rightarrow92.5\\%$) evaluated using 175B\nparameter GPT-based LLM.", + "authors": "Shima Imani, Liang Du, Harsh Shrivastava", + "published": "2023-03-04", + "updated": "2023-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.11171v4", + "title": "Self-Consistency Improves Chain of Thought Reasoning in Language Models", + "abstract": "Chain-of-thought prompting combined with pre-trained large language models\nhas achieved encouraging results on complex reasoning tasks. In this paper, we\npropose a new decoding strategy, self-consistency, to replace the naive greedy\ndecoding used in chain-of-thought prompting. It first samples a diverse set of\nreasoning paths instead of only taking the greedy one, and then selects the\nmost consistent answer by marginalizing out the sampled reasoning paths.\nSelf-consistency leverages the intuition that a complex reasoning problem\ntypically admits multiple different ways of thinking leading to its unique\ncorrect answer. Our extensive empirical evaluation shows that self-consistency\nboosts the performance of chain-of-thought prompting with a striking margin on\na range of popular arithmetic and commonsense reasoning benchmarks, including\nGSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and\nARC-challenge (+3.9%).", + "authors": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou", + "published": "2022-03-21", + "updated": "2023-03-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.15594v4", + "title": "A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level", + "abstract": "We demonstrate that a neural network pre-trained on text and fine-tuned on\ncode solves mathematics course problems, explains solutions, and generates new\nquestions at a human level. We automatically synthesize programs using few-shot\nlearning and OpenAI's Codex transformer and execute them to solve course\nproblems at 81% automatic accuracy. We curate a new dataset of questions from\nMIT's largest mathematics courses (Single Variable and Multivariable Calculus,\nDifferential Equations, Introduction to Probability and Statistics, Linear\nAlgebra, and Mathematics for Computer Science) and Columbia University's\nComputational Linear Algebra. We solve questions from a MATH dataset (on\nPrealgebra, Algebra, Counting and Probability, Intermediate Algebra, Number\nTheory, and Precalculus), the latest benchmark of advanced mathematics problems\ndesigned to assess mathematical reasoning. We randomly sample questions and\ngenerate solutions with multiple modalities, including numbers, equations, and\nplots. The latest GPT-3 language model pre-trained on text automatically solves\nonly 18.8% of these university questions using zero-shot learning and 30.8%\nusing few-shot learning and the most recent chain of thought prompting. In\ncontrast, program synthesis with few-shot learning using Codex fine-tuned on\ncode generates programs that automatically solve 81% of these questions. Our\napproach improves the previous state-of-the-art automatic solution accuracy on\nthe benchmark topics from 8.8% to 81.1%. We perform a survey to evaluate the\nquality and difficulty of generated questions. 
This work is the first to\nautomatically solve university-level mathematics course questions at a human\nlevel and the first work to explain and generate university-level mathematics\ncourse questions at scale, a milestone for higher education.", + "authors": "Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, Roman Wang, Nikhil Singh, Taylor L. Patti, Jayson Lynch, Avi Shporer, Nakul Verma, Eugene Wu, Gilbert Strang", + "published": "2021-12-31", + "updated": "2022-05-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10435v2", + "title": "PAL: Program-aided Language Models", + "abstract": "Large language models (LLMs) have recently demonstrated an impressive ability\nto perform arithmetic and symbolic reasoning tasks, when provided with a few\nexamples at test time (\"few-shot prompting\"). Much of this success can be\nattributed to prompting methods such as \"chain-of-thought'', which employ LLMs\nfor both understanding the problem description by decomposing it into steps, as\nwell as solving each step of the problem. While LLMs seem to be adept at this\nsort of step-by-step decomposition, LLMs often make logical and arithmetic\nmistakes in the solution part, even when the problem is decomposed correctly.\nIn this paper, we present Program-Aided Language models (PAL): a novel approach\nthat uses the LLM to read natural language problems and generate programs as\nthe intermediate reasoning steps, but offloads the solution step to a runtime\nsuch as a Python interpreter. With PAL, decomposing the natural language\nproblem into runnable steps remains the only learning task for the LLM, while\nsolving is delegated to the interpreter. We demonstrate this synergy between a\nneural LLM and a symbolic interpreter across 13 mathematical, symbolic, and\nalgorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all\nthese natural language reasoning tasks, generating code using an LLM and\nreasoning using a Python interpreter leads to more accurate results than much\nlarger models. For example, PAL using Codex achieves state-of-the-art few-shot\naccuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B\nwhich uses chain-of-thought by absolute 15% top-1. Our code and data are\npublicly available at http://reasonwithpal.com/ .", + "authors": "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig", + "published": "2022-11-18", + "updated": "2023-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.03874v2", + "title": "Measuring Mathematical Problem Solving With the MATH Dataset", + "abstract": "Many intellectual endeavors require mathematical problem solving, but this\nskill remains beyond the capabilities of computers. To measure this ability in\nmachine learning models, we introduce MATH, a new dataset of 12,500 challenging\ncompetition mathematics problems. Each problem in MATH has a full step-by-step\nsolution which can be used to teach models to generate answer derivations and\nexplanations. To facilitate future research and increase accuracy on MATH, we\nalso contribute a large auxiliary pretraining dataset which helps teach models\nthe fundamentals of mathematics. 
Even though we are able to increase accuracy\non MATH, our results show that accuracy remains relatively low, even with\nenormous Transformer models. Moreover, we find that simply increasing budgets\nand model parameter counts will be impractical for achieving strong\nmathematical reasoning if scaling trends continue. While scaling Transformers\nis automatically solving most other text-based tasks, scaling is not currently\nsolving MATH. To have more traction on mathematical problem solving we will\nlikely need new algorithmic advancements from the broader research community.", + "authors": "Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt", + "published": "2021-03-05", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.02311v5", + "title": "PaLM: Scaling Language Modeling with Pathways", + "abstract": "Large language models have been shown to achieve remarkable performance\nacross a variety of natural language tasks using few-shot learning, which\ndrastically reduces the number of task-specific training examples needed to\nadapt the model to a particular application. To further our understanding of\nthe impact of scale on few-shot learning, we trained a 540-billion parameter,\ndensely activated, Transformer language model, which we call Pathways Language\nModel PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML\nsystem which enables highly efficient training across multiple TPU Pods. We\ndemonstrate continued benefits of scaling by achieving state-of-the-art\nfew-shot learning results on hundreds of language understanding and generation\nbenchmarks. On a number of these tasks, PaLM 540B achieves breakthrough\nperformance, outperforming the finetuned state-of-the-art on a suite of\nmulti-step reasoning tasks, and outperforming average human performance on the\nrecently released BIG-bench benchmark. A significant number of BIG-bench tasks\nshowed discontinuous improvements from model scale, meaning that performance\nsteeply increased as we scaled to our largest model. PaLM also has strong\ncapabilities in multilingual tasks and source code generation, which we\ndemonstrate on a wide array of benchmarks. We additionally provide a\ncomprehensive analysis on bias and toxicity, and study the extent of training\ndata memorization with respect to model scale. Finally, we discuss the ethical\nconsiderations related to large language models and discuss potential\nmitigation strategies.", + "authors": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel", + "published": "2022-04-05", + "updated": "2022-10-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.10977v1", + "title": "Evaluating Transformer Language Models on Arithmetic Operations Using Number Decomposition", + "abstract": "In recent years, Large Language Models such as GPT-3 showed remarkable\ncapabilities in performing NLP tasks in the zero and few shot settings. On the\nother hand, the experiments highlighted the difficulty of GPT-3 in carrying out\ntasks that require a certain degree of reasoning, such as arithmetic\noperations. In this paper we evaluate the ability of Transformer Language\nModels to perform arithmetic operations following a pipeline that, before\nperforming computations, decomposes numbers in units, tens, and so on. We\ndenote the models fine-tuned with this pipeline with the name Calculon and we\ntest them in the task of performing additions, subtractions and multiplications\non the same test sets of GPT-3. Results show an increase of accuracy of 63% in\nthe five-digit addition task. Moreover, we demonstrate the importance of the\ndecomposition pipeline introduced, since fine-tuning the same Language Model\nwithout decomposing numbers results in 0% accuracy in the five-digit addition\ntask.", + "authors": "Matteo Muffo, Aldo Cocco, Enrico Bertino", + "published": "2023-04-21", + "updated": "2023-04-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17651v2", + "title": "Self-Refine: Iterative Refinement with Self-Feedback", + "abstract": "Like humans, large language models (LLMs) do not always generate the best\noutput on their first try. Motivated by how humans refine their written text,\nwe introduce Self-Refine, an approach for improving initial outputs from LLMs\nthrough iterative feedback and refinement. The main idea is to generate an\ninitial output using an LLMs; then, the same LLMs provides feedback for its\noutput and uses it to refine itself, iteratively. Self-Refine does not require\nany supervised training data, additional training, or reinforcement learning,\nand instead uses a single LLM as the generator, refiner, and feedback provider.\nWe evaluate Self-Refine across 7 diverse tasks, ranging from dialog response\ngeneration to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,\nand GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine\nare preferred by humans and automatic metrics over those generated with the\nsame LLM using conventional one-step generation, improving by ~20% absolute on\naverage in task performance. 
Our work demonstrates that even state-of-the-art\nLLMs like GPT-4 can be further improved at test time using our simple,\nstandalone approach.", + "authors": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark", + "published": "2023-03-30", + "updated": "2023-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.09561v5", + "title": "Large Language Models are Better Reasoners with Self-Verification", + "abstract": "Recently, with the chain of thought (CoT) prompting, large language models\n(LLMs), e.g., GPT-3, have shown strong reasoning ability in several natural\nlanguage processing tasks such as arithmetic, commonsense, and logical\nreasoning. However, LLMs with CoT require multi-step prompting and multi-token\nprediction, which is highly sensitive to individual mistakes and vulnerable to\nerror accumulation. The above issues make the LLMs need the ability to verify\nthe answers. In fact, after inferring conclusions in some thinking decision\ntasks, people often check them by re-verifying steps to avoid some mistakes. In\nthis paper, we propose and prove that LLMs also have similar self-verification\nabilities. We take the conclusion obtained by CoT as one of the conditions for\nsolving the original problem. By performing a backward verification of the\nanswers that LLM deduced for itself, we can obtain interpretable answer\nvalidation scores to select the candidate answer with the highest score.\nExperimental results demonstrate that the proposed method can improve the\nreasoning performance on various arithmetic, commonsense, and logical reasoning\ndatasets. Our code is publicly available at:\nhttps://github.com/WENGSYX/Self-Verification.", + "authors": "Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, Jun Zhao", + "published": "2022-12-19", + "updated": "2023-10-19", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. 
For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.02336v3", + "title": "Making Large Language Models Better Reasoners with Step-Aware Verifier", + "abstract": "Few-shot learning is a challenging task that requires language models to\ngeneralize from limited examples. Large language models like GPT-3 and PaLM\nhave made impressive progress in this area, but they still face difficulties in\nreasoning tasks such as GSM8K, a benchmark for arithmetic problems. To improve\ntheir reasoning skills, previous work has proposed to guide the language model\nwith prompts that elicit a series of reasoning steps before giving the final\nanswer, achieving a significant improvement on GSM8K from 17.9% to 58.1% in\nproblem-solving rate. In this paper, we present DIVERSE (Diverse Verifier on\nReasoning Step), a novel approach that further enhances the reasoning\ncapability of language models. DIVERSE has three main components: first, it\ngenerates diverse prompts to explore different reasoning paths for the same\nquestion; second, it uses a verifier to filter out incorrect answers based on a\nweighted voting scheme; and third, it verifies each reasoning step individually\ninstead of the whole chain. We evaluate DIVERSE on the latest language model\ncode-davinci-002 and show that it achieves new state-of-the-art results on six\nof eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%).", + "authors": "Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen", + "published": "2022-06-06", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.14858v2", + "title": "Solving Quantitative Reasoning Problems with Language Models", + "abstract": "Language models have achieved remarkable performance on a wide range of tasks\nthat require natural language understanding. Nevertheless, state-of-the-art\nmodels have generally struggled with tasks that require quantitative reasoning,\nsuch as solving mathematics, science, and engineering problems at the college\nlevel. To help close this gap, we introduce Minerva, a large language model\npretrained on general natural language data and further trained on technical\ncontent. The model achieves state-of-the-art performance on technical\nbenchmarks without the use of external tools. 
We also evaluate our model on\nover two hundred undergraduate-level problems in physics, biology, chemistry,\neconomics, and other sciences that require quantitative reasoning, and find\nthat the model can correctly answer nearly a third of them.", + "authors": "Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra", + "published": "2022-06-29", + "updated": "2022-07-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.12588v4", + "title": "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks", + "abstract": "Recently, there has been significant progress in teaching language models to\nperform step-by-step reasoning to solve complex numerical reasoning tasks.\nChain-of-thoughts prompting (CoT) is by far the state-of-art method for these\ntasks. CoT uses language models to perform both reasoning and computation in\nthe multi-step `thought' process. To disentangle computation from reasoning, we\npropose `Program of Thoughts' (PoT), which uses language models (mainly Codex)\nto express the reasoning process as a program. The computation is relegated to\nan external computer, which executes the generated programs to derive the\nanswer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP,\nTabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA)\nfor both few-shot and zero-shot setups. Under both few-shot and zero-shot\nsettings, PoT can show an average performance gain over CoT by around 12\\%\nacross all the evaluated datasets. By combining PoT with self-consistency\ndecoding, we can achieve SoTA performance on all math problem datasets and\nnear-SoTA performance on financial datasets. All of our data and code are\nreleased in Github https://github.com/wenhuchen/Program-of-Thoughts", + "authors": "Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen", + "published": "2022-11-22", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.06743v2", + "title": "Exploring Generalization Ability of Pretrained Language Models on Arithmetic and Logical Reasoning", + "abstract": "To quantitatively and intuitively explore the generalization ability of\npre-trained language models (PLMs), we have designed several tasks of\narithmetic and logical reasoning. We both analyse how well PLMs generalize when\nthe test data is in the same distribution as the train data and when it is\ndifferent, for the latter analysis, we have also designed a cross-distribution\ntest set other than the in-distribution test set. We conduct experiments on one\nof the most advanced and publicly released generative PLM - BART. 
Our research\nfinds that the PLMs can easily generalize when the distribution is the same,\nhowever, it is still difficult for them to generalize out of the distribution.", + "authors": "Cunxiang Wang, Boyuan Zheng, Yuchen Niu, Yue Zhang", + "published": "2021-08-15", + "updated": "2021-10-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.10625v3", + "title": "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models", + "abstract": "Chain-of-thought prompting has demonstrated remarkable performance on various\nnatural language reasoning tasks. However, it tends to perform poorly on tasks\nwhich requires solving problems harder than the exemplars shown in the prompts.\nTo overcome this challenge of easy-to-hard generalization, we propose a novel\nprompting strategy, least-to-most prompting. The key idea in this strategy is\nto break down a complex problem into a series of simpler subproblems and then\nsolve them in sequence. Solving each subproblem is facilitated by the answers\nto previously solved subproblems. Our experimental results on tasks related to\nsymbolic manipulation, compositional generalization, and math reasoning reveal\nthat least-to-most prompting is capable of generalizing to more difficult\nproblems than those seen in the prompts. A notable finding is that when the\nGPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve\nthe compositional generalization benchmark SCAN in any split (including length\nsplit) with an accuracy of at least 99% using just 14 exemplars, compared to\nonly 16% accuracy with chain-of-thought prompting. This is particularly\nnoteworthy because neural-symbolic models in the literature that specialize in\nsolving SCAN are trained on the entire training set containing over 15,000\nexamples. We have included prompts for all the tasks in the Appendix.", + "authors": "Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi", + "published": "2022-05-21", + "updated": "2023-04-16", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. 
(2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. 
Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. 
We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. 
Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. 
Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. 
To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. 
To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. 
Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. 
We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. 
Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. 
The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. 
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. 
To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. 
The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. 
It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. 
To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. 
But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. 
Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. 
Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. 
Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. 
Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and that specific option placements may favor certain\npredictions among those top choices, depending on the question, due to\npositional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race?
This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. 
Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. 
Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. 
As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. 
In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. 
Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. 
With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. 
The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make a thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on the user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat leads to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models.
The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? 
A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. 
Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. 
To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. 
However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. 
We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. 
Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. 
Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. 
Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. 
With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. 
Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. 
We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.15156v1", + "title": "Regressive Side Effects of Training Language Models to Mimic Student Misconceptions", + "abstract": "This paper presents a novel exploration into the regressive side effects of\ntraining Large Language Models (LLMs) to mimic student misconceptions for\npersonalized education. We highlight the problem that as LLMs are trained to\nmore accurately mimic student misconceptions, there is a compromise in the\nfactual integrity and reasoning ability of the models. Our work involved\ntraining an LLM on a student-tutor dialogue dataset to predict student\nresponses. The results demonstrated a decrease in the model's performance\nacross multiple benchmark datasets, including the ARC reasoning challenge and\nTruthfulQA, which evaluates the truthfulness of model's generated responses.\nFurthermore, the HaluEval Dial dataset, used for hallucination detection, and\nMemoTrap, a memory-based task dataset, also reported a decline in the model\naccuracy. To combat these side effects, we introduced a \"hallucination token\"\ntechnique. This token, appended at the beginning of each student response\nduring training, instructs the model to switch between mimicking student\nmisconceptions and providing factually accurate responses. Despite the\nsignificant improvement across all datasets, the technique does not completely\nrestore the LLM's baseline performance, indicating the need for further\nresearch in this area. This paper contributes to the ongoing discussion on the\nuse of LLMs for student modeling, emphasizing the need for a balance between\npersonalized education and factual accuracy.", + "authors": "Shashank Sonkar, Naiming Liu, Richard G. Baraniuk", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "The intersection of artificial intelligence and education has been an area of active research, with a focus on developing systems that can adapt to and support individual learners. Our work touches upon several research domains, including student modeling, the design of intelligent tutoring systems, and the deployment of Large Language Models (LLMs) in educational contexts. 2.1 Student Modeling Student modeling has long been the cornerstone of personalized learning, with early attempts using rule-based and Bayesian systems to predict student knowledge and behaviors [21]. Recent advancements have shifted towards utilizing machine learning to create more sophisticated models that can adapt to student learning patterns over time [22, 7]. 
Our work builds upon these foundations by exploring how LLMs can simulate not only the knowledge but also the typical errors and misconceptions students have during the learning process. 2.2 Intelligent Tutoring Systems (ITS) Intelligent tutoring systems have been designed to provide immediate and personalized instruction or feedback to learners without human intervention [23]. The application of LLMs in ITS presents a novel opportunity to create systems that can engage in more natural and meaningful dialogues with students [3, 12]. Our approach diverges from traditional ITS by focusing on the intentional generation of errors to mimic a student\u2019s learning trajectory, rather than solely providing expert-level instructions [24]. 2.3 Large Language Models in Education The use of LLMs like GPT [5] in education is a relatively new but rapidly growing field of study [25]. These models have been employed for various educational purposes, from generating educational content to serving as conversational agents [26, 12]. However, the challenge of ensuring the truthfulness and reliability of the information provided by LLMs is a recurring concern [27]. Our research contributes to this dialogue by investigating the impact of training LLMs to produce student-like errors and proposing a novel \u2018hallucination token\u2019 to manage this trade-off. 2.4 Truthfulness and Reliability in AI The TruthfulQA benchmark has been instrumental in highlighting the issues of truthfulness in AI-generated content [28]. The ARC challenge further emphasizes the complexity of reasoning required from AI systems beyond simple fact retrieval [29]. Our work is aligned with these challenges, as we seek to understand and improve the truthfulness and reasoning capacity of LLMs when they are trained to replicate student behaviors. In conclusion, our study intersects with and contributes to the existing body of work in these areas by addressing the unique challenge of training LLMs to authentically mimic student learning processes, including the generation of errors. Our introduction of the \u201challucination token\u201d represents a step forward in this domain, suggesting a new direction for future research and development.", "pre_questions": [], "main_content": "Introduction Personalized education, an approach that caters to the unique learning needs of individuals, is increasingly becoming a key aspiration in educational technology [1, 2, 3]. With the advent of advanced Large Language Models (LLMs), this aspiration is inching closer to reality. LLMs, such as Llama [4] and GPT [5, 6] models, are playing a pivotal role in this domain, demonstrating significant potential in various applications, including the simulation of student behavior [7] and learning patterns [8]. However, the road to leveraging LLMs for personalized education is challenging [9, 10, 11]. In this paper, we have identified regressive side effects arising from training LLMs to mimic student behavior. We find that training LLMs to replicate student misconceptions accurately is a double-edged sword. On one hand, it creates a model that can mimic student behavior, making it a potentially effective tool for personalized learning. On the other hand, it significantly compromises the model\u2019s factual integrity and reasoning ability. These regressive side effects are a critical issue, as the primary role of any educational model is to provide accurate and reliable information.
To investigate this issue further, we conducted a comprehensive exploration involving training LLMs on a student-tutor dialogue dataset. This dataset, derived from the CLASS framework [12, 13, 2], comprises dialogues on biology questions sourced from college-level textbooks. An example of the student-tutor interaction from the dataset is illustrated in figure 1. It provides a realistic representation of student learning patterns, featuring student misconceptions and the tutor\u2019s rectifications. We used the dataset to train the latest Vicuna models (7B and 13B) [14], state-of-the-art Llama [4] variants, to mimic student responses. However, the training process significantly decreased the model\u2019s performance across various benchmark datasets, including the ARC reasoning challenge [15], TruthfulQA [16], Hallucination Evaluation Dialogue [17], and MemoTrap [18]. We present a detailed analysis across nine key benchmarks using the Eleuther AI Language Model Evaluation Harness [19], a widely used framework [20] which provides a thorough and fair assessment of generative models across a spectrum of reasoning and general knowledge tasks. Figure 1: This figure illustrates a typical student-tutor interaction from the CLASS [12] dataset. Unlike the CLASS methodology, which focuses on training a tutor model, our study trains a \u2018student model\u2019, with the LLM predicting student responses. This approach is motivated by the potential of personalized education, where understanding and mimicking student behavior can lead to more effective learning interventions. However, this approach, while effectively mimicking student misconceptions, leads to regressive side effects such as compromising the model\u2019s factual integrity and diminishing its reasoning abilities. The conversation shown here exemplifies this issue, where the student\u2019s response, while partially correct, contains misconceptions. To mitigate these side effects, we introduce hallucination tokens ([hal] and [/hal]) appended to student responses during training. These tokens instruct the model to switch between mimicking student misconceptions and providing factually accurate responses. Despite significant improvements, the technique does not fully restore the model\u2019s baseline performance, highlighting the complexity of the issue and the need for further research. To further understand the regressive side effects, we conducted a control experiment to compare the model trained to predict tutor responses versus one trained to predict student responses. The results showed that training the LLM on tutor responses did not lead to the performance decline observed when mimicking student responses. This trend highlights that the regressive side effects are a unique challenge specific to training LLMs to replicate student misconceptions. To counteract the side effects, we propose to incorporate novel start and end hallucination tokens ([hal] and [/hal]) into the LLM training process. These tokens, placed at the beginning and end of each student response, serve as cues to the model, instructing it when to differentiate between providing accurate responses and replicating student misconceptions.
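To make the token scheme concrete, the following minimal sketch shows one plausible way a tutor-student turn could be serialized into a training string with the [hal] and [/hal] markers wrapping the student response; the prompt format, helper name, and example dialogue are illustrative assumptions rather than the authors' released code.

```python
# Minimal sketch (assumed formatting, not the authors' released code):
# wrap each student turn in [hal] ... [/hal] so the model learns when it is
# expected to imitate a student rather than answer factually.
HAL_START, HAL_END = "[hal]", "[/hal]"

def build_training_example(tutor_context: str, student_response: str,
                           use_hal_tokens: bool = True) -> str:
    """Serialize one tutor-student turn into a single training string."""
    student = (f"{HAL_START} {student_response} {HAL_END}"
               if use_hal_tokens else student_response)
    return f"Tutor: {tutor_context}\nStudent: {student}"

# Hypothetical dialogue turn in the style of the CLASS dataset:
print(build_training_example(
    "Which organelle carries out photosynthesis in plant cells?",
    "I think it's the mitochondria, because that's where energy is made.",
))
# Tutor: Which organelle carries out photosynthesis in plant cells?
# Student: [hal] I think it's the mitochondria, because that's where energy is made. [/hal]
```

Under this kind of formatting, generating with the markers present would steer the model toward student-like (possibly mistaken) answers, while omitting them requests a factually grounded response.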
Our results indicate a substantial improvement in the model\u2019s performance across all datasets after introducing these tokens. However, these tokens do not fully restore the model\u2019s baseline performance, underscoring the complexity of the issue. Through our research, we make the following critical contributions in the realm of personalized education leveraging LLMs: 1. We have uncovered and thoroughly investigated regressive side effects in LLMs trained for student modeling. This research highlights the paradoxical challenge when LLMs are trained to mimic student misconceptions, potentially compromising their factual integrity and reasoning ability. 2. We have proposed hallucination tokens to mitigate these regressive effects. These tokens, added to the training process, instruct the LLMs to differentiate between mimicking student misconceptions and providing factually accurate responses, substantially improving the model\u2019s performance. 3. Despite the improvements achieved with the hallucination tokens, our research indicates that they do not fully counteract the regressive side effects. This points to the complexity of this issue and underscores the need for further research in this area. Our research marks a significant step towards understanding the complexities of using LLMs for student modeling. The findings and contributions of this study will fuel further exploration and innovation in this domain, ultimately refining the use of LLMs in personalized learning environments. Our methodology is divided into three main parts: data preparation, model training, and the incorporation of hallucination tokens. 3.1 Data Preparation The first step in our methodology involves preparing the dataset for training the LLMs. We denote the conversation dataset as D, which consists of ordered pairs of tutor-student conversational turns: D = {(x1, y1), (x2, y2), ..., (xN, yN)}, where N is the total number of conversational turns. Each x represents a sequence of tutor utterances, and each corresponding y represents the student response. The dataset is derived from the CLASS framework [12], which provides a realistic representation of student learning patterns, featuring student misconceptions and the tutor\u2019s rectifications. This dataset provides a rich source of student-tutor dialogues on biology questions sourced from college textbooks. 3.2 Model Training The second step in our methodology involves training LLMs. The LLMs are designed to predict the next utterance given the previous conversational context. Unlike traditional approaches that focus on the correct responses typically output by a tutoring system, our model centers on student outputs, which may possess a mix of correctness and misconceptions. For an input sequence xi, the LLM aims to generate an output sequence \u0177i that resembles a student\u2019s response. The language modeling loss for a single data pair is defined by the negative log likelihood: L(yi, \u0177i) = \u2212\u2211_{t=1}^{|yi|} log p(yi,t | xi, yi,