diff --git "a/related_34K/test_related_short_2405.00263v1.json" "b/related_34K/test_related_short_2405.00263v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2405.00263v1.json" @@ -0,0 +1,1434 @@ +[ + { + "url": "http://arxiv.org/abs/2405.00263v1", + "title": "Clover: Regressive Lightweight Speculative Decoding with Sequential Knowledge", + "abstract": "Large language models (LLMs) suffer from low efficiency as the mismatch\nbetween the requirement of auto-regressive decoding and the design of most\ncontemporary GPUs. Specifically, billions to trillions of parameters must be\nloaded to the GPU cache through its limited memory bandwidth for computation,\nbut only a small batch of tokens is actually computed. Consequently, the GPU\nspends most of its time on memory transfer instead of computation. Recently,\nparallel decoding, a type of speculative decoding algorithms, is becoming more\npopular and has demonstrated impressive efficiency improvement in generation.\nIt introduces extra decoding heads to large models, enabling them to predict\nmultiple subsequent tokens simultaneously and verify these candidate\ncontinuations in a single decoding step. However, this approach deviates from\nthe training objective of next token prediction used during pre-training,\nresulting in a low hit rate for candidate tokens. In this paper, we propose a\nnew speculative decoding algorithm, Clover, which integrates sequential\nknowledge into the parallel decoding process. This enhancement improves the hit\nrate of speculators and thus boosts the overall efficiency. Clover transmits\nthe sequential knowledge from pre-speculated tokens via the Regressive\nConnection, then employs an Attention Decoder to integrate these speculated\ntokens. Additionally, Clover incorporates an Augmenting Block that modifies the\nhidden states to better align with the purpose of speculative generation rather\nthan next token prediction. The experiment results demonstrate that Clover\noutperforms the baseline by up to 91% on Baichuan-Small and 146% on\nBaichuan-Large, respectively, and exceeds the performance of the previously\ntop-performing method, Medusa, by up to 37% on Baichuan-Small and 57% on\nBaichuan-Large, respectively.", + "authors": "Bin Xiao, Chunan Shi, Xiaonan Nie, Fan Yang, Xiangwei Deng, Lei Su, Weipeng Chen, Bin Cui", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Speculative Inference Since Speculative decoding for LLM first proposed in [13, 6], multiple optimization technologies has been studied. Tree attention was explored in [16] and widely applied for verifying multiple speculations in a single step. Several early works [12, 15, 17, 19, 28, 27, 11, 7] studied how to improve separated draft models, and some works [22, 10, 9] also explored training-free 9 draft model architecture, while more recent works such as [5, 3, 23, 8] also drew more attention to the integrated draft model. Clover is one of the extension based on such lightweight speculator. Regressive Speculator There are some recent approaches that also explore the potential superiority of the regressive speculator. Zhang et al. [26] use an MLP layer as a regressive block, and Hydra [2] also introduces an additional block in their implementation. Eagle [14] also introduces a regressive transformer block to speculate. 
Chimera [25] proposed a Trigram Encoder and a Full Context Encoder as regressive speculators. The primary distinction of Clover is the use of a cross-attention decoder and the exploration of the Augmenting Block, with the aim of optimizing the utilization of sequential knowledge derived from both pre-speculated tokens and the input sentence. In addition, Clover focuses on throughput improvement at larger batch sizes and smaller tree sizes, which has not been sufficiently addressed in previous speculative decoding work.", + "pre_questions": [], + "main_content": "Introduction Generative large language models (LLMs) [18, 1, 4], such as GPT, represent a significant breakthrough in artificial intelligence. They have demonstrated remarkable proficiency across a diverse range of applications, from composing creative literary works to generating human-like dialogues in chatbots. Their capability to understand and produce language has opened up new avenues for human-computer interaction, automating tasks that necessitate an understanding of context and nuance. However, despite their strong capabilities, LLMs also present significant challenges related to low generation efficiency on GPUs. Specifically, these models create text sequentially by generating one output token per step, responding to a user query in two distinct phases: the prefilling phase and the decoding phase. [Figure 1: Overview of Medusa decoding and our extended Clover Decoding; (a) Medusa decoding with the i-th Medusa Head (i = 1, 2, 3); (b) Clover decoding with the Regressive Connection (Section 3.1), Attention Decoder (Section 3.2), and Augmenting Block (Section 3.3).] (1) During the prefilling phase, the model processes all tokens in the input prompt or context in a single iteration to generate the initial output token. (2) During the decoding phase, the model, informed by the prompt/context and previously generated tokens, continues to produce subsequent output tokens one at a time through multiple iterations until the response is complete. Because the decoding phase involves multiple rounds of generation, where each round processes only a small batch of tokens, the GPUs\u2019 computational resources are severely underutilized. Speculative decoding [13, 6] is an acceleration technique that mitigates these performance issues. It increases computational density by generating multiple tokens in a single step while ensuring the outputs remain entirely consistent. Specifically, speculative decoding involves one or more lightweight draft models that speculate multiple subsequent tokens with negligible overhead. These speculations are then verified by the original target model, which can thus generate multiple tokens in a single iteration. The accuracy of these speculators is critical for decoding speed, while more complex speculators increase inference overhead and thereby extend latency. Numerous studies [16, 15, 17, 19, 28, 27, 11] have explored enhancing latency and throughput using independent draft models as speculators.
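To make the draft-then-verify procedure just described concrete, the following is a minimal sketch of one speculative decoding step in its simplest two-model, greedy form. The draft_model and target_model interfaces, the fixed speculation length gamma, and the greedy acceptance rule are illustrative assumptions rather than the implementation of any system discussed here; full systems additionally use rejection sampling to preserve the target distribution and tree-structured verification (Section 2.2) to check several speculations at once.

```python
# Minimal sketch of one draft-then-verify speculative decoding step with greedy
# acceptance. `draft_model` and `target_model` are hypothetical callables that map
# a [1, seq_len] token tensor to [1, seq_len, vocab] logits; a non-empty prompt is assumed.
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, tokens, gamma=4):
    # tokens: list[int] -- the prompt plus previously generated tokens.
    # 1) Draft: the lightweight model proposes `gamma` tokens auto-regressively.
    draft, proposed = list(tokens), []
    for _ in range(gamma):
        nxt = int(draft_model(torch.tensor([draft])).argmax(-1)[0, -1])
        draft.append(nxt)
        proposed.append(nxt)

    # 2) Verify: a single target forward pass scores every drafted position at once.
    preds = target_model(torch.tensor([draft])).argmax(-1)[0]  # target's choice after each prefix

    # 3) Accept the longest prefix on which the target agrees with the draft, then
    #    append the target's own token at the first disagreement (or a bonus token).
    accepted, base = [], len(tokens)
    for i, tok in enumerate(proposed):
        target_tok = int(preds[base + i - 1])  # target prediction for position base + i
        if target_tok != tok:
            accepted.append(target_tok)
            break
        accepted.append(tok)
    else:
        accepted.append(int(preds[-1]))        # all proposals accepted: keep the extra token
    return tokens + accepted
```

This greedy variant already shows why speculator accuracy matters: every accepted draft token removes one full forward pass of the target model.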
Additionally, recent discussions [5, 14, 3, 2, 26, 25, 8] have highlighted the advantages of integrated speculators, noting their lightweight nature and ease of deployment. The Medusa solution [5] leverages lightweight heads as speculators. As shown in Figure 1a, it features multiple parallel MLP (Multi-Layer Perceptron) layers that receive inputs from the hidden states of the last transformer block. Each layer is designed to predict a single subsequent token and utilizes a tree-based verification process to simultaneously generate multiple tokens and initiate new speculations. This lightweight head mechanism has led to substantial improvements in inference speed. However, Medusa still encounters several challenges that can hinder its performance. Firstly, each Medusa head consists of only a single MLP layer that takes input solely from the final hidden states. Each head independently speculates on a word at a specified position beyond the next, disregarding the sequential dependencies from previously predicted tokens, which often results in decreased accuracy. Secondly, because the Medusa heads operate independently, the tokens they speculate are combined using a Cartesian product to form an exponentially large token tree. This approach can lead to suboptimal performance when the decoding phase is not constrained by memory, as it generates a surplus of redundant tokens, particularly as the batch size increases. Additionally, the absence of sequential information compromises the effectiveness of the tree pruning algorithm, further impacting performance. In real-time serving scenarios, where the inference batch size is typically large, speculative decoding often faces computational constraints, leading to performance degradation. Figure 2 illustrates this trend: speculative decoding substantially outperforms auto-regressive decoding when the number of computed tokens is low. However, as the token count increases, the speedup provided by speculative decoding reaches an inflection point and gradually diminishes due to computational limitations. [Figure 2: Throughput (tokens/sec) vs. batch size on a model with approximately 30B parameters, assuming a speculation length of 5 with a 0.4 acceptance rate.] Consequently, the actual size of the token tree in practice is usually smaller than what is assumed in previous studies. To address these issues, we introduce Clover, an enhancement of the Medusa framework. Clover incorporates a regressive attention block into the speculative phase and introduces the Regressive Connection (Section 3.1), Attention Decoder (Section 3.2), and Augmenting Block (Section 3.3). These components enable speculators to utilize additional sequential knowledge, enhancing their accuracy. Moreover, the regressive architecture not only improves the precision of the speculations but also generates a token tree with more comprehensive dependency information. We evaluate Clover in a setting that more closely resembles real serving scenarios, involving larger batch sizes and a smaller token tree. The results on the Baichuan model family show that Clover achieves a maximum throughput improvement of 2.56\u00d7 over vanilla decoding and 1.25\u00d7 to 1.43\u00d7 over Medusa decoding. Moreover, Clover demonstrates an 11.7% to 26.4% improvement in accuracy on speculative heads, with a particularly notable increase of over 20% in the latter heads.
Additionally, it generates 50% 76% more extra tokens (except the first) per step than the Medusa method, thanks to the regressive mechanism. To summarize, our contributions can be outlined as follows: \u2022 We propose Clover, a new speculative decoding algorithm which incorporates an additional auto-regressive attention block to facilitate the consideration of sequential knowledge. \u2022 We introduce three key components to improve the original parallel decoding algorithms, including the Regressive Connection for utilizing sequential information from previously speculated tokens, the Attention Decoder for combining the speculated tokens with current inputs, and the Augmenting Block for modifying the hidden states to better align with the purpose of speculative generation. \u2022 Evaluations are conducted on both Baichuan-Small (with 7B parameters) and BaichuanLarge (with over 100B parameters). And results show that our Clover achieved better efficiency compared to existing methods, such as Medusa. 2 Background 2.1 Speculative Decoding Speculative decoding [13, 6], depicted in Figure 3b, is an advanced technique that accelerates LLM inference by leveraging hardware computational resources more efficiently. This method distinguishes itself from traditional auto-regressive decoding by calculating and generating multiple tokens simultaneously in each iteration. At the core of speculative decoding lies a speculator component, usually a smaller model often referred to as the draft model, which predicts several subsequent tokens. This approach contrasts with 3 Output 1: \ud835\udc61\ud835\udc611 Target Model \ud835\udc61\ud835\udc610 \ud835\udc61\ud835\udc611 \ud835\udc61\ud835\udc612 \ud835\udc61\ud835\udc613 \ud835\udc61\ud835\udc614 \ud835\udc61\ud835\udc615 \u2026 \ud835\udc61\ud835\udc610 \ud835\udc61\ud835\udc611 \ud835\udc61\ud835\udc612 \ud835\udc61\ud835\udc613 \ud835\udc61\ud835\udc614 Step 1 Step 2 Step 3 Step 4 Step 5 Output 2: \ud835\udc61\ud835\udc612 Output 3: \ud835\udc61\ud835\udc613 Output 4: \ud835\udc61\ud835\udc614 Output 5: \ud835\udc61\ud835\udc615 (a) Auto-regressive Decoding Target Model Speculators (e.g. Draft Models) \ud835\udc61\ud835\udc610 \ud835\udc61\ud835\udc611\ud835\udc61\ud835\udc612\ud835\udc61\ud835\udc613 \u2026 \ud835\udc61\ud835\udc614\ud835\udc61\ud835\udc615 Step 1 Step 2 \ud835\udc61\ud835\udc611\ud835\udc61\ud835\udc612\ud835\udc61\ud835\udc613\u2032\ud835\udc61\ud835\udc614\u2032 \ud835\udc61\ud835\udc614\ud835\udc61\ud835\udc615\u2032\ud835\udc61\ud835\udc616\u2032\ud835\udc61\ud835\udc617\u2032 (Verification) (Speculation) Step \u2026 Speculation 1: Verification 1: Output 1: \ud835\udc61\ud835\udc611 \ud835\udc61\ud835\udc612 \ud835\udc61\ud835\udc613\u2032 \ud835\udc61\ud835\udc614\u2032 Rejected Accepted Speculation 2: Verification 2: Output 2: \ud835\udc61\ud835\udc611 \ud835\udc61\ud835\udc612 \ud835\udc61\ud835\udc613 \ud835\udc61\ud835\udc614 \ud835\udc61\ud835\udc615\u2032 \ud835\udc61\ud835\udc616\u2032 \ud835\udc61\ud835\udc617\u2032 Rejected Accepted \ud835\udc61\ud835\udc614 \ud835\udc61\ud835\udc615 (match 2 tokens) (match 1 token) (b) Speculative Decoding (maximal speculation length is 4) Figure 3: The comparison between Auto-regressive Decoding and Speculative Decoding. Speculative Decoding may generate multiple tokens in a single step based on the speculation, thus achieves less decoding iteration and lower inference latency. Prompt: We should apply Output: Bayesian Speculations: 1. [Bayesian] optimization algorithm to 2. 
[Bayesian] optimization with adaptive 3. [Bayesian] network model to Inputs: Bayesian optimization algorithm to with adaptive network model to Bayesian optimization algorithm to with adaptive network model to optimization algorithm to with adaptive network model to Bayesian Token Tree: prefix_match CasualMask for Tree Attention 1 with dependency 0 without dependency Figure 4: A demonstration of Tree Attention in Speculative Decoding. Multiple speculations are merged by prefix matching to form a tree, and its topology dependency is represented in a 2-D matrix as the casual mask in Attention computation. auto-regressive decoding, where only the last generated token is fed into the system. In speculative decoding, the original LLM (the target model) receives all speculated tokens as input. This allows the target model to compute attention scores and derive logits over multiple tokens effectively, ensuring that it can generate consistent outputs within a single iteration. This stage is termed the verification phase, during which the target model screens out any incorrect tokens from the speculations. As a result, speculative inference can produce equivalent outputs with fewer decoding steps, thereby enhancing latency efficiency. 2.2 Tree Attention Tree Attention [16] is utilized to calculate attention scores for multiple speculations in parallel. By applying prefix matching to various speculated sequences, the speculation results are organized into a token tree, which is represented as a 2-D matrix (Figure 4). It is important to note that the attention block is the only component within the modern LLM architecture that requires knowledge of sequential dependency. The scoring of tree-structured tokens is a relatively straightforward task and can be achieved by configuring the attention\u2019s Causal-Mask to align with the topological matrix. Tree Attention facilitates the integration of multiple speculations with minimal computational overhead, a feature widely implemented in many speculative decoding systems such as [10, 24, 20]. 2.3 Medusa Decoding Figure 1a illustrates the Medusa architecture [5], which features several independent and parallel MLP heads. Each of these heads, designated as the i-th head, is specifically fine-tuned to predict the next-i token following the actual output token during each iteration. These lightweight heads constitute the speculator component of the Medusa system, seamlessly integrated into the target model. This integration allows for simultaneous speculation and verification within the decoding process. The design of these heads enables Medusa to effectively manage the balance between 4 Regressive Connection (Section 3.1) Augmenting Block (Section 3.3) MLP block Attention block + + lm_head next-0 token stacked N times Transformer Block Attention Decoder (Section 3.2) MLPi lm_head MLP block [optional] Attention block next-i token \ud835\udc52\ud835\udc520 \ud835\udc52\ud835\udc52\ud835\udc56\ud835\udc56 \ud835\udc52\ud835\udc52\ud835\udc56\ud835\udc56\u22121 \u210e\ud835\udc56\ud835\udc56 \u210e\ud835\udc56\ud835\udc56\u22121 i-th Clover Head \ud835\udc56\ud835\udc56= 1, 2, 3 Q K V RMS Norm * \u00b7 \ud835\udc52\ud835\udc52\ud835\udc56\ud835\udc56\u22121 \u210e\ud835\udc56\ud835\udc56\u22121 + \u210e\ud835\udc56\ud835\udc56 The i-th step of Attention Decoder look up look up SiLU Figure 5: Detailed architecture design of Clover. 
computational efficiency and predictive accuracy, ensuring that each token generated contributes optimally to the overall sequence coherence and context relevance. 3 Clover Design Figure 5 shows how Clover is integrated into existent LLM as the speculator. Clover introduces three incremental components to leverage sequential knowledge: Regressive Connection, Attention Decoder and Augmenting Block. The Regressive Connection enables sequential dependency from preceding speculated tokens to be considered when a speculator generating the next token. The Attention Decoder is the factual regressive block in Clover, combining the hidden states from the last transformer block and previously speculated token, merging sequential knowledge between pre-speculated tokens and the entire input sentence. While the Augmenting Block is an additional transformer or self-attention block appended to the target model, used for enhancing sequence features to improve speculator accuracy. 3.1 Regressive Connection Each Medusa head is responsible for speculating the token at the specified location, without considering the pre-generated speculation, as shown in Figure 1a. Although such independence enables multiple heads compute in parallel , the neglect of sequence dependencies limits the hit rate of speculation, and further increases the inference latency. Clover applies regressive connection to the speculator, depicted as the blue dotted lines in Figure 5. The embedding vectors of current speculated tokens will be regressively used to predict the token at the next position. Introducing such sequential dependency knowledge offers two benefits: Firstly, speculative heads are able to generate predictions more accurately with previous tokens known, thus decrease inference latency. Although the critical path of the computation becomes proportional to the depth of the speculation and loses a certain amount of parallelism, the increase in speculation accuracy rather improves the overall latency. Secondly, since every speculated token in the latter position has one token in its previous location as the precursor, the token tree for verification phase can have greater information density. In contrast to the exponentially sized token tree that result from the independence of words at each position, smaller token tree with sequence-dependency information is easy for pruning and less likely to meet computation bound on modern GPUs, while introducing negligible information loss. 5 3.2 Attention Decoder Clover introduces cross attention decoder as the actual regressive block. The decoder takes two vectors as inputs: the embedding vector from the previous token, and the hidden states throughout the speculation. Specifically, considering the computation flow on the i-th head, to generate the next-i token (denoted as toki), the inputs of cross attention decoder are: the normalized embedding vector of the token toki\u22121 (denoted as ei\u22121), and the hidden states from the last speculation step (denoted a hi\u22121). The computation can be formulated as follows: Qi = WQ \u00b7 normalize(hi\u22121), Ki = WK \u00b7 ei\u22121, Vi = WV \u00b7 ei\u22121, (1) hi = hi\u22121 + Attention(Qi, Ki, Vi), (2) , where hi is the output of the cross attention decoder, fed into the corresponding MLP layer to generate toki. For the first head, the hidden states h0 comes from the last transformer block of the target LLM model (or the Augmenting Block, see below), and the embedding e0 is from the next-0 token t0 generated by the target model. 
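To make the computation in Eq. (1) and (2) concrete, the following is a minimal PyTorch sketch of a single Attention Decoder step. Tensor shapes, module names, and the use of RMSNorm (as drawn in Figure 5) are assumptions for illustration, and details such as the SiLU gating shown in Figure 5 are omitted; this is a sketch of the equations, not the actual Clover implementation.

```python
# Minimal sketch of one Clover Attention Decoder step (Eq. (1)-(2)): the running
# hidden state h_{i-1} forms the query and the embedding e_{i-1} of the previously
# speculated token forms the key/value; the result is added back residually.
# Requires torch >= 2.4 for nn.RMSNorm; dimensions and naming are assumptions.
import torch
import torch.nn as nn

class CloverAttentionDecoder(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.norm = nn.RMSNorm(hidden_size)
        self.w_q = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_k = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_v = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, h_prev: torch.Tensor, e_prev: torch.Tensor) -> torch.Tensor:
        # h_prev, e_prev: [batch, hidden_size] -- only two vectors per request or beam.
        q = self.w_q(self.norm(h_prev))   # Q_i = W_Q . normalize(h_{i-1})
        k = self.w_k(e_prev)              # K_i = W_K . e_{i-1}
        v = self.w_v(e_prev)              # V_i = W_V . e_{i-1}
        # With a single key/value vector the softmax over one position equals 1, so the
        # attention output reduces to v; the score is kept only to mirror Eq. (2).
        score = torch.softmax((q * k).sum(-1, keepdim=True) / q.shape[-1] ** 0.5, dim=-1)
        return h_prev + score * v         # h_i = h_{i-1} + Attention(Q_i, K_i, V_i)
```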
The hidden state hi is recursively propagated throughout the entire speculation phase, piggybacking features from the entire input sentence. The role of Clover\u2019s Attention Decoder is to combine and resolve the information from both the input sentence and the previously speculated tokens, helping the succeeding MLP layer speculate the token at the present position with more sequential knowledge. We also explore the effectiveness of using an MLP layer as the regressive block, but obtain sub-optimal performance (more details in Section 4.3); this is probably because simply concatenating the two input vectors makes it harder to learn and extract valid features. Clover\u2019s Attention Decoder has negligible overhead because its inputs are only two vectors per request or beam. 3.3 Augmenting Block The original target LLM is pre-trained only to predict the next token. To extract more information for speculators to predict more succeeding tokens, we append an additional transformer block to augment features from the entire input sentence. The output of this additional augmenting block (i.e. h0) is fed into the Attention Decoder. Introducing such a whole layer incurs only a small computation overhead (approximately 1/Nlayer of the inference time), while the accuracy gain from the augmenting block outweighs the time it consumes. We explore different architectures for this augmenting block, and find that the attention block contributes the largest accuracy gain for all the speculative heads. We also notice that the MLP block only adds approximately 1% accuracy gain, so we leave the MLP layer in the Augmenting Block optional. We still add the MLP block in our Clover implementation since it does indeed increase accuracy while incurring only negligible overhead. The corresponding evaluation results are discussed in Section 4.3. 3.4 Other Details In Medusa, each head is equipped with an individual LM head, which contains a large number of parameters (i.e. the hidden size multiplied by the vocabulary size) and makes training more time-consuming. In Clover, all the speculative heads share the original LM head of the target model. Furthermore, in the Regressive Connection, the embedding vector of the last generated token is also given by the LM head (the look up arrow in Figure 1b). Specifically, the embedding vector ei is given by the one-hot vector of token ti multiplied by the transposed, normalized weight matrix of the LM head. Compared with looking up the embedding table, we believe this embedding distribution is much closer to the hidden states from the last transformer block, whose weights are used to initialize the augmenting block, reducing the difficulty of fine-tuning. 4 Evaluation 4.1 Experiment Settings Models and baselines Both the Medusa and Clover approaches are applied to the Baichuan Small (with 7B parameters) and Baichuan Large (with over 100B parameters) models [21], with the number of LM heads set to 3, and are named Medusa (Baichuan) and Clover (Baichuan), respectively. To ensure a fair comparison, the same inference engine, tree construction, and tree sampling algorithm are used for all scenarios. We also evaluate auto-regressive decoding under the same circumstances. Dataset We employ the Baichuan internal supervised fine-tuning (SFT) dataset, containing approximately 0.15B tokens, 95% of which are Chinese, to train both Medusa (Baichuan) and Clover (Baichuan).
We then evaluate inference performance on another internal Baichuan dataset, which consists of a variety of tasks: retrieval augmentation (RA), multi-turn conversation (MC), code (Code), information processing (IP), creation (CA), logical reasoning (RS), math (Math), tabular (Tab), question answering (QA) and medical suggestion (Med). Each of the ten tasks contains 100 dialogues. Training Both models are trained with all weights of the target model frozen. For Medusa (Baichuan), the initial weight settings correspond to the configuration given in the Medusa technical report [5]. For Clover (Baichuan), the initial weights of the Augmenting Block are identical to the last transformer block of the target model, and the initialization of the MLP layer is the same as in Medusa\u2019s method. For the Attention Decoder, the weights of Q and K are initialized as identity matrices with Gaussian noise added, while the V matrix is set to all zeros. We train the heads for 1 epoch, with (\u03b21 = 0.9, \u03b22 = 0.999) for the AdamW optimizer. The learning rate is set to 1e-3 for Baichuan Small and 6e-4 for Baichuan Large, with cosine decay applied. For both models equipped with Clover, the trainable parameters are approximately 0.2B and 2B, taking 2 hours to train on 8x NVIDIA A800 GPUs and 32x NVIDIA H800 GPUs, respectively. Metrics We choose tokens/step and tokens/second as our main metrics, following prior speculative decoding works. The former metric measures the accepted length, indicating the accuracy of the speculators, while the latter reports the overall system throughput. We report the top-k accuracy of each head in the ablation study to gain more intuitive insight into the different model architectures. 4.2 End-to-end Results [Figure 6: Number of extra generated tokens (excluding the first one) per step on various tasks (RA, MC, Code, IP, CA, RS, Math, Tab, QA, Med) for Clover (Baichuan) and Medusa (Baichuan) on Baichuan Small and Baichuan Large; Clover\u2019s per-task gain over Medusa ranges from 1.50x to 1.76x.] We evaluate the end-to-end performance at different batch sizes. As mentioned in Section 1, in a real-time serving environment the system needs to compute requests with large batch sizes and easily becomes compute-bound. Thus we set the token tree size to 4 for both speculative decoding methods. We also investigate and find that further expansion of the tree sampling size leads to marginal effects (see Appendix A.2). Figure 6 illustrates the average number of tokens generated per step for the Clover and Medusa methods on different tasks. Note that the value on the vertical axis is the number of extra tokens per step, excluding the actual token generated by the target model, which more accurately reflects the performance of the speculator. Clover generates 50% to 76% more extra tokens per step than the Medusa method on all tasks, highlighting its superiority over the Medusa architecture in terms of speculator accuracy.
7 Model Task Approach Tokens/second and Improvement over Vanilla Decoding Size bs=4 bs=8 bs=16 bs=24 bs=32 bs=48 Small CA Clover (Baichuan) 195.9 373.1 615.5 872.0 1035.2 1352.3 +44% +40% +22% +19% +14% -39% Medusa (Baichuan) 160.3 343.6 554.3 778.7 948.2 1246.0 +18% +28% +10% +6% +4% \u221243% Vanilla 135.5 266.4 501.3 731.6 903.2 2217.4 Math Clover (Baichuan) 232.2 411.3 673.3 988.4 1137.7 1462.8 +91% +76% +57% +67% -5% -21% Medusa (Baichuan) 187.6 342.4 645.2 786.8 974.4 1333.2 +54% +46% +50% +32% \u221218% \u221228% Vanilla 121.5 233.5 428.5 591.6 1202.4 1874.0 Large CA Clover (Baichuan) 169.3 290.2 488.5 638.4 754.4 938.2 +86% +55% +34% +23% +22% +5% Medusa (Baichuan) 132.3 222.1 373.3 486.8 581.3 638.0 +45% +19% +2% \u22125% \u22125% \u221228% Vanilla 90.7 186.2 362.8 515.5 615.8 887.8 Math Clover (Baichuan) 207.3 342.1 549.5 715.4 874.3 1067.8 +146% +103% +81% +62% +63% +36% Medusa (Baichuan) 159.1 269.8 401.3 557.0 705.7 781.8 +89% +60% +32% +26% +31% +0% Vanilla 84.2 168.3 302.9 440.9 535.2 780.9 Table 1: End-to-end throughput on Baichuan Small and Baichuan Large with different decoding methods on two tasks, where bs in the head means batch size, and Vanilla refers to auto-regressive decoding. Results for other tasks are shown in Appendex A.1. The end-to-end throughput (i.e. tokens/second) results are shown in Table 1. Both Clover and Medusa speculative decoding methods outperform auto-regressive decoding (at most 2.05\u00d7 2.56\u00d7 in terms of throughput on Baichuan Large model) due to effective hardware utilisation in most scenarios. We also find that the advantage of both speculation decoding methods generally diminishes with increasing batch size. This is because speculative decoding with larger batch size is getting closer to the computational bounded. The performance fluctuation is due to unpredictable random factors during the inference and a not fully optimized implementation of our engine. More results for other tasks can be found in Appendix A.1, Clover (Baichuan) still retains its best performance over Medusa(Baichuan) and auto-regressive decoding in all categories. Moreover, Clover decoding generates more tokens per step and achieves higher throughput than Medusa decoding in all scenarios (at most 1.26\u00d7 and 1.47\u00d7 for Baichuan Small and Baichuan Large, respectively2), because the gain in head accuracy from the additional components proposed in Clover outweighs their computational overhead. The advantages of our system over Medusa are even more pronounced for larger model sizes, as the speculator module makes up a smaller proportion of the overall model. The sequential knowledge from pre-generated speculation tokens helps the current head to predict next speculation token more accurately, especially when speculating a long multi-token phrase that appears first in the first head, but not at the next-0 token (i.e. the actual output token). 4.3 Ablation Study In Ablation study, we use top-k accuracy of each head as the metrics to intuitively understand how each component affects accuracy. 2The value is the ratio of Clover to Medusa throughput. 
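As an illustration of the top-k head-accuracy metric used in this ablation study, the sketch below shows one way per-head top-k accuracy could be computed; the function and tensor names are hypothetical and not taken from the Clover code.

```python
# Illustrative sketch of the per-head top-k accuracy metric: head i is counted as
# correct when the ground-truth token at its offset appears among its k
# highest-scoring candidates. Names and shapes are hypothetical.
import torch

def head_topk_accuracy(head_logits: torch.Tensor, labels: torch.Tensor, k: int = 5) -> float:
    # head_logits: [num_positions, vocab] -- scores produced by one speculative head.
    # labels:      [num_positions]        -- ground-truth tokens at that head's offset.
    topk = head_logits.topk(k, dim=-1).indices             # [num_positions, k]
    hits = (topk == labels.unsqueeze(-1)).any(dim=-1)      # True where the label is in the top-k
    return hits.float().mean().item()
```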
[Figure 7: Ablation study on Baichuan Small model; (a) ablation on components, reporting top-1/2/3/5 accuracy of Heads 1-3 for Clover (Baichuan) as the Regressive Connection, Attention Decoder, and Augmenting Block are removed; (b) exploration on the Augmenting Block, comparing a full Transformer block (Clover), MLP only, Attention only, and no augmenting.] Ablations on Components We start from the complete Clover (Baichuan) and gradually remove the Attention Decoder, Regressive Connection and Augmenting Block. Note that after removing all three components, the model architecture becomes identical to Medusa (Baichuan). Figure 7a shows the accuracy of Clover (Baichuan) with different components enabled. By removing the Attention Decoder and taking an MLP layer as the regressive block instead, the top-5 accuracy of the three heads drops by (4.8%, 9.0%, 11.5%). This is because the MLP layer itself does not distinguish the different embedding vectors, making it hard to learn valid sequential knowledge from the entire sentence. If we further disable the Regressive Connection, the accuracy decreases as well (e.g. a further 2.6%, 4.6%, 6.0% top-5 accuracy loss on the three heads). Removing the Regressive Connection removes the sequential dependency knowledge of pre-generated tokens, resulting in accuracy loss. Finally, Clover (Baichuan) becomes Medusa (Baichuan) after further disabling the Augmenting Block, losing an additional (4.3%, 8.1%, 8.9%) top-5 accuracy across the heads, respectively. The missing augmented sequential knowledge makes it harder for the succeeding heads to perform speculation. Overall, comparing Clover (Baichuan) with Medusa (Baichuan), the Clover approach brings in sequential knowledge from pre-generated speculative tokens as well as the input sentence, improving the performance of all speculative heads, especially the latter two. The same observation is also confirmed in Figure 7b. Exploration on Augmenting Block We further explore potential variants of the Augmenting Block, including: a whole transformer block (the actual architecture in Clover (Baichuan)), attention block only, MLP block only, and no Augmenting Block. Figure 7b shows the accuracy of Clover with different types of Augmenting Block equipped. The transformer block contributes (4.9%, 9.0%, 9.4%) top-5 accuracy to the heads compared with no augmenting block, in which the attention block plays the major role (increasing top-5 accuracy by 1.9%, 4.4%, 5.1% when only the attention block is enabled in the Augmenting Block). The MLP block only provides a (1.0%, 1.2%, 0.7%) accuracy improvement, so we leave it as an optional component. The attention mechanism focuses on extracting relationships between tokens in the sentence, making it easier to learn for feature augmentation. Although the performance gain from incorporating the MLP is not significant, we chose to enable it in order to improve prediction accuracy, as it has minimal time and memory overhead.
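For reference, the sketch below shows the kind of Augmenting Block compared above: a decoder-style block (causal self-attention plus an optional MLP) appended after the last transformer layer of the target model. The layer structure, norm choice and expansion factor are assumptions based on Section 3.3, not the actual Baichuan block.

```python
# Minimal sketch of an Augmenting Block: an extra decoder-style block appended
# after the target model's last transformer layer, with the MLP sub-block optional
# (the ablation above keeps it because its overhead is negligible). The structure
# is an assumption based on Section 3.3; requires torch >= 2.4 for nn.RMSNorm.
import torch
import torch.nn as nn

class AugmentingBlock(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, use_mlp: bool = True):
        super().__init__()
        self.attn_norm = nn.RMSNorm(hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.use_mlp = use_mlp
        if use_mlp:
            self.mlp_norm = nn.RMSNorm(hidden_size)
            self.mlp = nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.SiLU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_size] from the last transformer block.
        seq_len = hidden_states.size(1)
        causal = torch.triu(  # boolean mask: True marks positions that may NOT be attended
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=hidden_states.device), 1)
        x = self.attn_norm(hidden_states)
        attn_out, _ = self.attn(x, x, x, attn_mask=causal, need_weights=False)
        hidden_states = hidden_states + attn_out          # attention part: main accuracy gain
        if self.use_mlp:                                  # optional MLP part: roughly 1% extra gain
            hidden_states = hidden_states + self.mlp(self.mlp_norm(hidden_states))
        return hidden_states  # last-position output serves as h0 for the Attention Decoder
```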
We present Clover, an extension of the Medusa method that considers sequential knowledge in speculation generation. Clover exploits sequential knowledge from pre-generated speculative tokens (Section 3.1), the entire input sentence (Section 3.3) and their combination (Section 3.2), achieving 11.7% 26.4% more top-5 accuracy for speculative heads and a 1.26\u00d7 1.47\u00d7 throughput improvement when deploying Clover on Baichuan Large model compared with Medusa method (with 50% 76% more speculative tokens accepted), and at most 2.56\u00d7 with vanilla auto-regressive decoding. The main contribution to accuracy comes from the latter heads (+21.7% +26.4%) compared to +11.7% for the first head. Such evidence support the view that the auto-regressive mechanism is an effective approach to improve the accuracy of speculation." + }, + { + "url": "http://arxiv.org/abs/2401.15077v2", + "title": "EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty", + "abstract": "Autoregressive decoding makes the inference of Large Language Models (LLMs)\ntime-consuming. In this paper, we reconsider speculative sampling and derive\ntwo key observations. Firstly, autoregression at the feature\n(second-to-top-layer) level is more straightforward than at the token level.\nSecondly, the inherent uncertainty in feature (second-to-top-layer) level\nautoregression constrains its performance. Based on these insights, we\nintroduce EAGLE (Extrapolation Algorithm for Greater Language-model\nEfficiency), a simple yet highly efficient speculative sampling framework. By\nincorporating a token sequence advanced by one time step, EAGLE effectively\nresolves the uncertainty, enabling precise second-to-top-layer feature\nprediction with minimal overhead. We conducted comprehensive evaluations of\nEAGLE, including all models from the Vicuna and LLaMA2-Chat series, the MoE\nmodel Mixtral 8x7B Instruct, and tasks in dialogue, code generation,\nmathematical reasoning, and instruction following. For LLaMA2-Chat 70B, EAGLE\nachieved a latency speedup ratio of 2.7x-3.5x, doubled throughput, while\nmaintaining the distribution of the generated text.", + "authors": "Yuhui Li, Fangyun Wei, Chao Zhang, Hongyang Zhang", + "published": "2024-01-26", + "updated": "2024-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.08461v2", + "title": "DistillSpec: Improving Speculative Decoding via Knowledge Distillation", + "abstract": "Speculative decoding (SD) accelerates large language model inference by\nemploying a faster draft model for generating multiple tokens, which are then\nverified in parallel by the larger target model, resulting in the text\ngenerated according to the target model distribution. However, identifying a\ncompact draft model that is well-aligned with the target model is challenging.\nTo tackle this issue, we propose DistillSpec that uses knowledge distillation\nto better align the draft model with the target model, before applying SD.\nDistillSpec makes two key design choices, which we demonstrate via systematic\nstudy to be crucial to improving the draft and target alignment: utilizing\non-policy data generation from the draft model, and tailoring the divergence\nfunction to the task and decoding strategy. Notably, DistillSpec yields\nimpressive 10 - 45% speedups over standard SD on a range of standard\nbenchmarks, using both greedy and non-greedy sampling. 
Furthermore, we combine\nDistillSpec with lossy SD to achieve fine-grained control over the latency vs.\ntask performance trade-off. Finally, in practical scenarios with models of\nvarying sizes, first using distillation to boost the performance of the target\nmodel and then applying DistillSpec to train a well-aligned draft model can\nreduce decoding latency by 6-10x with minimal performance drop, compared to\nstandard decoding without distillation.", + "authors": "Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-Fran\u00e7ois Kagy, Rishabh Agarwal", + "published": "2023-10-12", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.15758v2", + "title": "Chimera: A Lossless Decoding Method for Accelerating Large Language Models Inference by Fusing all Tokens", + "abstract": "Large language models (LLMs) have demonstrated remarkable capabilities across\nvarious tasks. However, their widespread application is hindered by the\nresource-intensive decoding process. To address this challenge, current\napproaches have incorporated additional decoding heads to enable parallel\nprediction of multiple subsequent tokens, thereby achieving inference\nacceleration. Nevertheless, the accuracy of these decoding heads falls short of\nthe auto-regressive decoding approach.\n In light of these limitations, we propose Chimera, a novel framework\nspecifically designed for speculative sampling. Within this framework, we\nintroduce a lightweight draft model that effectively utilizes previously\ngenerated tokens to predict subsequent words. To ensure both accuracy and\nefficiency, we present two strategies within the lightweight draft model.\nFirstly, we focus on capturing short-range dependencies at the bottom layer.\nSecondly, we leverage the readily available representations from the original\nLLM.Through empirical evaluation on the Vicuna and LlaMA-2 series, Chimera\ndemonstrates impressive results, achieving an average latency speedup ratio of\n2.7x compared to the vanilla auto-regressive decoding approach. This highlights\nthe potential of our proposed framework in significantly improving the\nefficiency of large language models during the decoding process.", + "authors": "Ziqian Zeng, Jiahong Yu, Qianshi Pang, Zihao Wang, Huiping Zhuang, Hongen Shao, Xiaofeng Zou", + "published": "2024-02-24", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.10774v1", + "title": "Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads", + "abstract": "The inference process in Large Language Models (LLMs) is often limited due to\nthe absence of parallelism in the auto-regressive decoding process, resulting\nin most operations being restricted by the memory bandwidth of accelerators.\nWhile methods such as speculative decoding have been suggested to address this\nissue, their implementation is impeded by the challenges associated with\nacquiring and maintaining a separate draft model. In this paper, we present\nMedusa, an efficient method that augments LLM inference by adding extra\ndecoding heads to predict multiple subsequent tokens in parallel. Using a\ntree-based attention mechanism, Medusa constructs multiple candidate\ncontinuations and verifies them simultaneously in each decoding step. 
By\nleveraging parallel processing, Medusa introduces only minimal overhead in\nterms of single-step latency while substantially reducing the number of\ndecoding steps required.\n We present two levels of fine-tuning procedures for Medusa to meet the needs\nof different use cases: Medusa-1: Medusa is directly fine-tuned on top of a\nfrozen backbone LLM, enabling lossless inference acceleration. Medusa-2: Medusa\nis fine-tuned together with the backbone LLM, enabling better prediction\naccuracy of Medusa heads and higher speedup but needing a special training\nrecipe that preserves the backbone model's capabilities.\n Moreover, we propose several extensions that improve or expand the utility of\nMedusa, including a self-distillation to handle situations where no training\ndata is available and a typical acceptance scheme to boost the acceptance rate\nwhile maintaining generation quality. We evaluate Medusa on models of various\nsizes and training procedures. Our experiments demonstrate that Medusa-1 can\nachieve over 2.2x speedup without compromising generation quality, while\nMedusa-2 further improves the speedup to 2.3-3.6x.", + "authors": "Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao", + "published": "2024-01-19", + "updated": "2024-01-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.02057v1", + "title": "Break the Sequential Dependency of LLM Inference Using Lookahead Decoding", + "abstract": "Autoregressive decoding of large language models (LLMs) is memory bandwidth\nbounded, resulting in high latency and significant wastes of the parallel\nprocessing power of modern accelerators. Existing methods for accelerating LLM\ndecoding often require a draft model (e.g., speculative decoding), which is\nnontrivial to obtain and unable to generalize. In this paper, we introduce\nLookahead decoding, an exact, parallel decoding algorithm that accelerates LLM\ndecoding without needing auxiliary models or data stores. It allows trading\nper-step log(FLOPs) to reduce the number of total decoding steps, is more\nparallelizable on single or multiple modern accelerators, and is compatible\nwith concurrent memory-efficient attention (e.g., FlashAttention). Our\nimplementation of Lookahead decoding can speed up autoregressive decoding by up\nto 1.8x on MT-bench and 4x with strong scaling on multiple GPUs in code\ncompletion tasks. Our code is avialable at\nhttps://github.com/hao-ai-lab/LookaheadDecoding", + "authors": "Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang", + "published": "2024-02-03", + "updated": "2024-02-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.01318v1", + "title": "Accelerating Large Language Model Decoding with Speculative Sampling", + "abstract": "We present speculative sampling, an algorithm for accelerating transformer\ndecoding by enabling the generation of multiple tokens from each transformer\ncall. Our algorithm relies on the observation that the latency of parallel\nscoring of short continuations, generated by a faster but less powerful draft\nmodel, is comparable to that of sampling a single token from the larger target\nmodel. This is combined with a novel modified rejection sampling scheme which\npreserves the distribution of the target model within hardware numerics. 
We\nbenchmark speculative sampling with Chinchilla, a 70 billion parameter language\nmodel, achieving a 2-2.5x decoding speedup in a distributed setup, without\ncompromising the sample quality or making modifications to the model itself.", + "authors": "Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, John Jumper", + "published": "2023-02-02", + "updated": "2023-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.17192v2", + "title": "Fast Inference from Transformers via Speculative Decoding", + "abstract": "Inference from large autoregressive models like Transformers is slow -\ndecoding K tokens takes K serial runs of the model. In this work we introduce\nspeculative decoding - an algorithm to sample from autoregressive models faster\nwithout any changes to the outputs, by computing several tokens in parallel. At\nthe heart of our approach lie the observations that (1) hard language-modeling\ntasks often include easier subtasks that can be approximated well by more\nefficient models, and (2) using speculative execution and a novel sampling\nmethod, we can make exact decoding from the large models faster, by running\nthem in parallel on the outputs of the approximation models, potentially\ngenerating several tokens concurrently, and without changing the distribution.\nOur method can accelerate existing off-the-shelf models without retraining or\narchitecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration\ncompared to the standard T5X implementation, with identical outputs.", + "authors": "Yaniv Leviathan, Matan Kalman, Yossi Matias", + "published": "2022-11-30", + "updated": "2023-05-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.13581v1", + "title": "PaSS: Parallel Speculative Sampling", + "abstract": "Scaling the size of language models to tens of billions of parameters has led\nto impressive performance on a wide range of tasks. At generation, these models\nare used auto-regressively, requiring a forward pass for each generated token,\nand thus reading the full set of parameters from memory. This memory access\nforms the primary bottleneck for generation and it worsens as the model size\nincreases. Moreover, executing a forward pass for multiple tokens in parallel\noften takes nearly the same time as it does for just one token. These two\nobservations lead to the development of speculative sampling, where a second\nsmaller model is used to draft a few tokens, that are then validated or\nrejected using a single forward pass of the large model. Unfortunately, this\nmethod requires two models that share the same tokenizer and thus limits its\nadoption. As an alternative, we propose to use parallel decoding as a way to\ndraft multiple tokens from a single model with no computational cost, nor the\nneed for a second model. Our approach only requires an additional input token\nthat marks the words that will be generated simultaneously. 
We show promising\nperformance (up to $30\\%$ speed-up) while requiring only as few as $O(d_{emb})$\nadditional parameters.", + "authors": "Giovanni Monea, Armand Joulin, Edouard Grave", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.04623v1", + "title": "Accelerating LLM Inference with Staged Speculative Decoding", + "abstract": "Recent advances with large language models (LLM) illustrate their diverse\ncapabilities. We propose a novel algorithm, staged speculative decoding, to\naccelerate LLM inference in small-batch, on-device scenarios. We address the\nlow arithmetic intensity of small-batch inference by improving upon previous\nwork in speculative decoding. First, we restructure the speculative batch as a\ntree, which reduces generation costs and increases the expected tokens per\nbatch. Second, we add a second stage of speculative decoding. Taken together,\nwe reduce single-batch decoding latency by 3.16x with a 762M parameter GPT-2-L\nmodel while perfectly preserving output quality.", + "authors": "Benjamin Spector, Chris Re", + "published": "2023-08-08", + "updated": "2023-08-08", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.11809v2", + "title": "Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding", + "abstract": "This research aims to accelerate the inference speed of large language models\n(LLMs) with billions of parameters. We propose \\textbf{S}mart \\textbf{P}arallel\n\\textbf{A}uto-\\textbf{C}orrect d\\textbf{E}coding (SPACE), an innovative\napproach designed for achieving lossless acceleration of LLMs. By integrating\nsemi-autoregressive inference and speculative decoding capabilities, SPACE\nuniquely enables autoregressive LLMs to parallelize token generation and\nverification. This is realized through a specialized semi-autoregressive\nsupervised fine-tuning process that equips existing LLMs with the ability to\nsimultaneously predict multiple tokens. Additionally, an auto-correct decoding\nalgorithm facilitates the simultaneous generation and verification of token\nsequences within a single model invocation. Through extensive experiments on a\nrange of LLMs, SPACE has demonstrated inference speedup ranging from 2.7x-4.0x\non HumanEval-X while maintaining output quality.", + "authors": "Hanling Yi, Feng Lin, Hongbin Li, Peiyang Ning, Xiaotian Yu, Rong Xiao", + "published": "2024-02-19", + "updated": "2024-04-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.12072v2", + "title": "SPEED: Speculative Pipelined Execution for Efficient Decoding", + "abstract": "Generative Large Language Models (LLMs) based on the Transformer architecture\nhave recently emerged as a dominant foundation model for a wide range of\nNatural Language Processing tasks. Nevertheless, their application in real-time\nscenarios has been highly restricted due to the significant inference latency\nassociated with these models. This is particularly pronounced due to the\nautoregressive nature of generative LLM inference, where tokens are generated\nsequentially since each token depends on all previous output tokens. It is\ntherefore challenging to achieve any token-level parallelism, making inference\nextremely memory-bound. 
In this work, we propose SPEED, which improves\ninference efficiency by speculatively executing multiple future tokens in\nparallel with the current token using predicted values based on early-layer\nhidden states. For Transformer decoders that employ parameter sharing, the\nmemory operations for the tokens executing in parallel can be amortized, which\nallows us to accelerate generative LLM inference. We demonstrate the efficiency\nof our method in terms of latency reduction relative to model accuracy and\ndemonstrate how speculation allows for training deeper decoders with parameter\nsharing with minimal runtime overhead.", + "authors": "Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Hasan Genc, Kurt Keutzer, Amir Gholami, Sophia Shao", + "published": "2023-10-18", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.05109v1", + "title": "Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding", + "abstract": "To combat the memory bandwidth-bound nature of autoregressive LLM inference,\nprevious research has proposed the speculative decoding framework. To perform\nspeculative decoding, a small draft model proposes candidate continuations of\nthe input sequence, that are then verified in parallel by the base model. One\nway to specify the draft model, as used in the recent Medusa decoding\nframework, is as a collection of light-weight heads, called draft heads, that\noperate on the base model's hidden states. To date, all existing draft heads\nhave been sequentially independent, meaning that they speculate tokens in the\ncandidate continuation independently of any preceding tokens in the candidate\ncontinuation. In this work, we propose Hydra heads, a sequentially dependent,\ndrop-in replacement for standard draft heads that significantly improves\nspeculation accuracy. Decoding with Hydra heads improves throughput compared to\nMedusa decoding with standard draft heads. We further explore the design space\nof Hydra head training objectives and architectures, and propose a\ncarefully-tuned Hydra head recipe, which we call Hydra++, that improves\ndecoding throughput by 1.31x and 2.71x compared to Medusa decoding and\nautoregressive decoding, respectively. Overall, Hydra heads are a simple\nintervention on standard draft heads that significantly improve the end-to-end\nspeed of draft head based speculative decoding.", + "authors": "Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, William Brandon", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.09919v2", + "title": "Recurrent Drafter for Fast Speculative Decoding in Large Language Models", + "abstract": "In this paper, we introduce an improved approach of speculative decoding\naimed at enhancing the efficiency of serving large language models. Our method\ncapitalizes on the strengths of two established techniques: the classic\ntwo-model speculative decoding approach, and the more recent single-model\napproach, Medusa. Drawing inspiration from Medusa, our approach adopts a\nsingle-model strategy for speculative decoding. 
However, our method\ndistinguishes itself by employing a single, lightweight draft head with a\nrecurrent dependency design, akin in essence to the small, draft model uses in\nclassic speculative decoding, but without the complexities of the full\ntransformer architecture. And because of the recurrent dependency, we can use\nbeam search to swiftly filter out undesired candidates with the draft head. The\noutcome is a method that combines the simplicity of single-model design and\navoids the need to create a data-dependent tree attention structure only for\ninference in Medusa. We empirically demonstrate the effectiveness of the\nproposed method on several popular open source language models, along with a\ncomprehensive analysis of the trade-offs involved in adopting this approach.", + "authors": "Aonan Zhang, Chong Wang, Yi Wang, Xuanyu Zhang, Yunfei Cheng", + "published": "2024-03-14", + "updated": "2024-03-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.02082v1", + "title": "GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding", + "abstract": "Speculative decoding is a relatively new decoding framework that leverages\nsmall and efficient draft models to reduce the latency of LLMs. In this study,\nwe introduce GliDe and CaPE, two low-hassle modifications to vanilla\nspeculative decoding to further improve the decoding speed of a frozen LLM.\nSpecifically, GliDe is a modified draft model architecture that reuses the\ncached keys and values from the target LLM, while CaPE is a proposal expansion\nmethod that uses the draft model's confidence scores to help select additional\ncandidate tokens for verification. Extensive experiments on different\nbenchmarks demonstrate that our proposed GliDe draft model significantly\nreduces the expected decoding latency. Additional evaluation using walltime\nreveals that GliDe can accelerate Vicuna models up to 2.17x and further extend\nthe improvement to 2.61x with CaPE. We will release our code, data, and the\ntrained draft models.", + "authors": "Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You", + "published": "2024-02-03", + "updated": "2024-02-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.04487v1", + "title": "Inference with Reference: Lossless Acceleration of Large Language Models", + "abstract": "We propose LLMA, an LLM accelerator to losslessly speed up Large Language\nModel (LLM) inference with references. LLMA is motivated by the observation\nthat there are abundant identical text spans between the decoding result by an\nLLM and the reference that is available in many real world scenarios (e.g.,\nretrieved documents). 
LLMA first selects a text span from the reference and\ncopies its tokens to the decoder and then efficiently checks the tokens'\nappropriateness as the decoding result in parallel within one decoding step.\nThe improved computational parallelism allows LLMA to achieve over 2x speed-up\nfor LLMs with identical generation results as greedy decoding in many practical\ngeneration scenarios where significant overlap between in-context reference and\noutputs exists (e.g., search engines and multi-turn conversations).", + "authors": "Nan Yang, Tao Ge, Liang Wang, Binxing Jiao, Daxin Jiang, Linjun Yang, Rangan Majumder, Furu Wei", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.08168v1", + "title": "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding", + "abstract": "We present a novel inference scheme, self-speculative decoding, for\naccelerating Large Language Models (LLMs) without the need for an auxiliary\nmodel. This approach is characterized by a two-stage process: drafting and\nverification. The drafting stage generates draft tokens at a slightly lower\nquality but more quickly, which is achieved by selectively skipping certain\nintermediate layers during drafting Subsequently, the verification stage\nemploys the original LLM to validate those draft output tokens in one forward\npass. This process ensures the final output remains identical to that produced\nby the unaltered LLM, thereby maintaining output quality. The proposed method\nrequires no additional neural network training and no extra memory footprint,\nmaking it a plug-and-play and cost-effective solution for inference\nacceleration. Benchmarks with LLaMA-2 and its fine-tuned models demonstrated a\nspeedup up to 1.73$\\times$.", + "authors": "Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, Sharad Mehrotra", + "published": "2023-09-15", + "updated": "2023-09-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.07863v4", + "title": "Speculative Decoding with Big Little Decoder", + "abstract": "The recent emergence of Large Language Models based on the Transformer\narchitecture has enabled dramatic advancements in the field of Natural Language\nProcessing. However, these models have long inference latency, which limits\ntheir deployment and makes them prohibitively expensive for various real-time\napplications. The inference latency is further exacerbated by autoregressive\ngenerative tasks, as models need to run iteratively to generate tokens\nsequentially without leveraging token-level parallelization. To address this,\nwe propose Big Little Decoder (BiLD), a framework that can improve inference\nefficiency and latency for a wide range of text generation applications. The\nBiLD framework contains two models with different sizes that collaboratively\ngenerate text. The small model runs autoregressively to generate text with a\nlow inference cost, and the large model is only invoked occasionally to refine\nthe small model's inaccurate predictions in a non-autoregressive manner. 
To\ncoordinate the small and large models, BiLD introduces two simple yet effective\npolicies: (1) the fallback policy that determines when to hand control over to\nthe large model; and (2) the rollback policy that determines when the large\nmodel needs to correct the small model's inaccurate predictions. To evaluate\nour framework across different tasks and models, we apply BiLD to various text\ngeneration scenarios encompassing machine translation on IWSLT 2017 De-En and\nWMT 2014 De-En, and summarization on XSUM and CNN/DailyMail. On an NVIDIA T4\nGPU, our framework achieves a speedup of up to 2.12x speedup with minimal\ngeneration quality degradation. Furthermore, our framework is fully\nplug-and-play and can be applied without any modifications in the training\nprocess or model architecture. Our code is open-sourced", + "authors": "Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W. Mahoney, Amir Gholami, Kurt Keutzer", + "published": "2023-02-15", + "updated": "2023-10-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.09781v4", + "title": "SpecInfer: Accelerating Generative Large Language Model Serving with Tree-based Speculative Inference and Verification", + "abstract": "This paper introduces SpecInfer, a system that accelerates generative large\nlanguage model (LLM) serving with tree-based speculative inference and\nverification. The key idea behind SpecInfer is leveraging small speculative\nmodels to predict the LLM's outputs; the predictions are organized as a token\ntree, whose nodes each represent a candidate token sequence. The correctness of\nall candidate token sequences represented by a token tree is verified against\nthe LLM in parallel using a novel tree-based parallel decoding mechanism.\nSpecInfer uses an LLM as a token tree verifier instead of an incremental\ndecoder, which significantly reduces the end-to-end latency and computational\nrequirement for serving generative LLMs while provably preserving model\nquality. Our evaluation shows that SpecInfer outperforms existing LLM serving\nsystems by 1.5-2.8x for distributed LLM inference and by 2.6-3.5x for\noffloading-based LLM inference, while preserving the same generative\nperformance. SpecInfer is publicly available at\nhttps://github.com/flexflow/FlexFlow/", + "authors": "Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, Chunan Shi, Zhuoming Chen, Daiyaan Arfeen, Reyna Abhyankar, Zhihao Jia", + "published": "2023-05-16", + "updated": "2024-04-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.DC", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.08252v2", + "title": "REST: Retrieval-Based Speculative Decoding", + "abstract": "We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm\ndesigned to speed up language model generation. The key insight driving the\ndevelopment of REST is the observation that the process of text generation\noften includes certain common phases and patterns. Unlike previous methods that\nrely on a draft language model for speculative decoding, REST harnesses the\npower of retrieval to generate draft tokens. This method draws from the\nreservoir of existing knowledge, retrieving and employing relevant tokens based\non the current context. 
Its plug-and-play nature allows for seamless\nintegration and acceleration of any language models, all without necessitating\nadditional training. When benchmarked on 7B and 13B language models in a\nsingle-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on\ncode or text generation. The code of REST is available at\nhttps://github.com/FasterDecoding/REST.", + "authors": "Zhenyu He, Zexuan Zhong, Tianle Cai, Jason D. Lee, Di He", + "published": "2023-11-14", + "updated": "2024-04-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.11462v4", + "title": "Cascade Speculative Drafting for Even Faster LLM Inference", + "abstract": "Introduced to enhance the efficiency of large language model (LLM) inference,\nspeculative decoding operates by having a smaller model generate a draft. A\nlarger target model then reviews this draft to align with its output, and any\nacceptance by the target model results in a reduction of the number of the\ntarget model runs, ultimately improving efficiency. However, the drafting\nprocess in speculative decoding includes slow autoregressive generation and\nallocates equal time to generating tokens, irrespective of their importance.\nThese inefficiencies collectively contribute to the suboptimal performance of\nspeculative decoding. To further improve LLM inference, we introduce Cascade\nSpeculative Drafting (CS Drafting), a speculative execution algorithm that\nincorporates two types of cascades. The Vertical Cascade eliminates\nautoregressive generation from neural models, while the Horizontal Cascade\noptimizes time allocation in drafting for improved efficiency. Combining both\ncascades, CS Drafting achieves up to an 81 percent additional speedup over\nspeculative decoding in our experiments, while maintaining the same output\ndistribution as the target model. Our code is publicly available at\nhttps://github.com/lfsszd/CS-Drafting.", + "authors": "Ziyi Chen, Xiaocong Yang, Jiacheng Lin, Chenkai Sun, Kevin Chen-Chuan Chang, Jie Huang", + "published": "2023-12-18", + "updated": "2024-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.11131v1", + "title": "Speculative Streaming: Fast LLM Inference without Auxiliary Models", + "abstract": "Speculative decoding is a prominent technique to speed up the inference of a\nlarge target language model based on predictions of an auxiliary draft model.\nWhile effective, in application-specific settings, it often involves\nfine-tuning both draft and target models to achieve high acceptance rates. As\nthe number of downstream tasks grows, these draft models add significant\ncomplexity to inference systems. We propose Speculative Streaming, a\nsingle-model speculative decoding method that fuses drafting into the target\nmodel by changing the fine-tuning objective from next token prediction to\nfuture n-gram prediction. Speculative Streaming speeds up decoding by 1.8 -\n3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and\nMeaning Representation, without sacrificing generation quality. Additionally,\nSpeculative Streaming is parameter-efficient. 
It achieves on-par/higher\nspeed-ups than Medusa-style architectures while using ~10000X fewer extra\nparameters, making it well-suited for resource-constrained devices.", + "authors": "Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, Mahyar Najibi", + "published": "2024-02-16", + "updated": "2024-02-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. 
Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. 
By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. 
Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. 
This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. 
We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. 
This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. 
This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. 
These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. 
We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. 
Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. 
The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. 
The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. 
Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. 
Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. 
Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. 
The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. 
The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. 
Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. 
Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. 
We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. 
Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). 
This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. 
Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. 
These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). 
Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. 
While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. 
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. 
To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. 
Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. 
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. 
Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). 
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. 
To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. 
Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + } +] \ No newline at end of file