diff --git "a/related_53K/test_related_long_2404.17140v1.json" "b/related_53K/test_related_long_2404.17140v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.17140v1.json" @@ -0,0 +1,8617 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17140v1", + "title": "Small Language Models Need Strong Verifiers to Self-Correct Reasoning", + "abstract": "Self-correction has emerged as a promising solution to boost the reasoning\nperformance of large language models (LLMs), where LLMs refine their solutions\nusing self-generated critiques that pinpoint the errors. This work explores\nwhether smaller-size (<= 13B) language models (LMs) have the ability of\nself-correction on reasoning tasks with minimal inputs from stronger LMs. We\npropose a novel pipeline that prompts smaller LMs to collect self-correction\ndata that supports the training of self-refinement abilities. First, we\nleverage correct solutions to guide the model in critiquing their incorrect\nresponses. Second, the generated critiques, after filtering, are used for\nsupervised fine-tuning of the self-correcting reasoner through solution\nrefinement. Our experimental results show improved self-correction abilities of\ntwo models on five datasets spanning math and commonsense reasoning, with\nnotable performance gains when paired with a strong GPT-4-based verifier,\nthough limitations are identified when using a weak self-verifier for\ndetermining when to correct.", + "authors": "Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Training Small Language Model to Self-Correct. Recent work shows that smaller language model can be fine-tuned on task-specific data to perform self-correct. But existing methods rely either on distilled data from stronger models (An et al., 2023; Yu et al., 2023b; Han et al., 2024; Ye et al., 2023; Zhang et al., 2024) or template-based critiques (Paul et al., 2023; Welleck et al., 2023). Our approach differs from prior studies in this domain as we gather natural language critiques from a small language model without relying on larger models or task-specific heuristics. Furthermore, we split the self-correction process into two phases: (SELF-)VERIFY and SELF-REFINE. This separation contrasts with earlier approaches that often merge the two skills, which not only obscures the true abilities of these models in each respective skill but also complicates the training process. In a nutshell, we demonstrate that strong verifiers unleash the power of small LMs to SELF-REFINE. Bootstrapping Reasoning in Language Models. As language models become more powerful, human supervision may not be sufficient to improve these models. This trend calls for selfimproving LMs that provide training signals for themselves (Zelikman et al., 2022; G\u00fcl\u00e7ehre et al., 2023; Yuan et al., 2023; Wang et al., 2023; Chen et al., 2024). The bootstrapping methods often involve iteratively fine-tuning a base LM on its selfgenerated examples that obtain a high reward value for correctness, helpfulness, or other desired properties. The bootstrapping process can further leverage label-free data (Huang et al., 2023a; Li et al., 2023a; Yuan et al., 2024) by generating pseudo labels using LLMs themselves. 
We draw inspiration from this family of methods and bootstrap the self-correction ability of smaller LMs. Our method is complementary to the rejection-sampling fine-tuning approach and can further improve reasoning performance upon it. Verifying Reasoning. Verification of reasoning chains involves judging the correctness of the final answer and of each reasoning step. The verifier is often used to rerank multiple over-generated solutions and select the best one as the final output (Cobbe et al., 2021), or to guide LLM decoding through the search space for correct reasoning paths (Khalifa et al., 2023). We leverage a verifier to determine when to self-correct. Verifiers come at different granularities, including process/step-based (Uesato et al., 2022b; Li et al., 2023b; Lightman et al., 2023) and outcome-based supervision (Cobbe et al., 2021; Yu et al., 2023a; Hosseini et al., 2024). We use the latter since it is easier to construct labels automatically. Besides training a verifier with supervision signals, LLMs can also be few-shot prompted to become verifiers (Weng et al., 2023; Madaan et al., 2023; Zhou et al., 2023; Asai et al., 2023). We also explore the possibility of LLM-as-verifier and demonstrate its usage for self-correction. We highlight the importance of verification in the context of self-correction, which echoes the recent finding that LLMs can often solve a problem successfully but cannot verify the reasoning (Gu et al., 2024; Oh et al., 2024; West et al., 2023). This calls for more effort in building evaluation benchmarks (Jacovi et al., 2024; Chen et al., 2023a; Mao et al., 2024; Lightman et al., 2023) and developing methods (Nguyen et al., 2024; Xu et al., 2024; Hosseini et al., 2024) to improve reasoning verification.", + "pre_questions": [], + "main_content": "Introduction Recent research shows that large language models (LLMs) (OpenAI, 2023) can self-correct their responses to meet diverse user requirements, ranging from diminishing harmful content to including specific keywords to debugging code (Madaan et al., 2023; Chen et al., 2023b). Self-correction is typically accomplished by first generating a critique that identifies the shortcomings of the initial response, followed by revising it according to the self-critique\u2014a process that can be iterated. Self-correction has emerged as an intriguing paradigm for rectifying the flaws in LLMs\u2019 outputs (Pan et al., 2023). However, models that are effective at self-correction are of very large sizes, and many of them are proprietary and accessible only via APIs. (Correspondence to yunxiang@umich.edu. Our implementation can be accessed at https://github.com/yunx-z/SCORE.) In this work, we focus on the self-correction abilities of small, open-source LMs. Previous studies have shown that these smaller models can learn self-correction in reasoning through distillation from stronger LMs (Yu et al., 2023b; An et al., 2023; Han et al., 2024). Yet this poses security risks for high-stakes domains and hinders the scientific understanding of enhancing LMs\u2019 ability to correct errors. We thus ask the question: To what degree do small LMs require guidance from strong LMs to learn self-correction for reasoning? We study this question by leveraging the small model itself to generate supervised training data to enhance its self-correction ability, instead of resorting to stronger LMs.
To this end, we draw inspiration from the rejection sampling fine-tuning (RFT) (Touvron et al., 2023; Yuan et al., 2023) method, where an LLM\u2019s reasoning skills are bootstrapped via diverse chain-of-thought sampling and supervised fine-tuning on the correct reasoning chains. We propose SCORE\u2014an approach to bootstrap small LMs\u2019 Self-COrrection ability in REasoning tasks. Concretely, we devise a pipeline for accumulating high-quality critique-correction data from small LMs, which is used for supervised fine-tuning of self-correcting reasoners. First, we leverage correct solutions as hints for the base LMs to critique incorrect answers. By reverse-engineering from the correct answer, the models generate more effective critiques. Second, we filter these critiques for correctness, well-formedness, and clarity using simple rule-based and prompting methods. Finally, we fine-tune the same LMs to become self-refining models using this curated data. (While the distinction between small vs. large language models is often context-dependent (Saunders et al., 2022; Yu et al., 2023b), in this work we interchangeably use \u201csmall\u201d or \u201cweak\u201d LMs to refer to open models with a few billion parameters, e.g., LLaMA-7/13B (Touvron et al., 2023).) By avoiding the use of supervision from stronger LMs, we ensure that our method enables a small LM to bootstrap its self-correction capabilities. We evaluate our SCORE fine-tuned refiner under both extrinsic and intrinsic self-correction settings (Huang et al., 2023b). The primary difference between these two settings is whether the refiner is allowed to use external signals to determine when to self-correct (i.e., refine the initial solution only when it is believed to be incorrect). Identifying when to self-correct involves verifying the solutions\u2019 correctness, which is still challenging for current state-of-the-art LLMs without proper external feedback (Huang et al., 2023b). We adopt a simple baseline for the self-verification problem following Cobbe et al. (2021). Specifically, we fine-tune the same LMs to become verifiers with labels based solely on the correctness of the final answer, conditioning on the question and a candidate solution. As for the extrinsic setting, we simulate strong verifiers with GPT-4 and oracle labels to show the effectiveness of small LMs as self-correcting reasoners. We test the SCORE method with the LLaMA-2-13B-chat (Touvron et al., 2023) and Gemma-7b-it (Team et al., 2024) models on five datasets spanning math and commonsense reasoning. We find that our model with SCORE fine-tuning outperforms the original model by an average of 14.6% when using a gpt-4-based verifier. Nevertheless, the model struggles with self-correction when subjected to a weak self-verifier fine-tuned on self-generated solutions. Our main contributions are summarized below: 1. We introduce SCORE, a novel pipeline to generate self-correction data from a small LM, and subsequently fine-tune the model to be a self-correcting reasoner. 2. Our method effectively augments the self-correction abilities of small LMs on math and commonsense reasoning when using strong verifiers. 3. To the best of our knowledge, we are the first to demonstrate the potential of small LMs to bootstrap their abilities on self-corrective reasoning without distilling training data from stronger LMs or using human annotation. 2 Problem Formulation of Self-Correction Self-Correct := (SELF-)VERIFY + SELF-REFINE.
We decompose the task of self-correction into two phases: (SELF-)VERIFY and SELF-REFINE. The LM first generates an initial solution for a reasoning question. A verifier, either the LM itself (intrinsic) or an external signal (extrinsic), then judges the correctness of the initial solution. If correct, the initial solution is directly used as the final answer. If incorrect, a refiner revises the solution. While this process can be iterated, we fix the number of iterations at 1 throughout this paper for efficiency and leave multiple iterations to future studies. Decoupling (SELF-)VERIFY and SELF-REFINE brings two major advantages over a one-model-does-all design. First, we can freely parameterize each module\u2014for example, by using a fine-tuned and a few-shot prompted model. This allows us to carefully examine the impact of strong vs. weak verifiers on the refiners\u2019 performance. On the contrary, previous work on self-correction with small LMs (Yu et al., 2023b; An et al., 2023; Han et al., 2024) conflates SELF-VERIFY and SELF-REFINE, creating a barrier to fully understanding the distinct capacities of these models in each skill. Second, it reduces the difficulty of training each module, since each model only needs to specialize in one kind of ability, either verification or refinement. SELF-REFINE := Critique + Correction. The challenge for SELF-REFINE is that it can be difficult for language models to directly map an initial solution to a revision without any guidance (Welleck et al., 2023). Using critiques\u2014assessments that pinpoint the locations of errors within the reasoning steps, explain the causes of these errors, and offer guidance on how to correct them\u2014can significantly enhance the performance of language models when generating revisions (Saunders et al., 2022; Madaan et al., 2023). Therefore, we formulate refinement with two steps: the model first generates a critique for the initial solutions determined as incorrect, followed by a corrected version, in a single pass. Yet, it is still non-trivial to obtain high-quality critiques to guide the error correction. We address this problem using the correct solutions as hints to facilitate the critique generation, detailed in Section 3.1. Figure 1: A diagram of the SCORE pipeline to generate critique-correction data from a small LM (steps a-c) and fine-tune the same LM to self-correct its reasoning errors (step d), without distilling any data from stronger LMs.
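To make the two-phase formulation above concrete, the following is a minimal sketch (not the authors' released code) of the verify-then-refine control flow for a single iteration; base_lm, verifier, refiner, and the 0.5 threshold are hypothetical stand-ins for the base LM, the (self- or external) verifier, and the fine-tuned refiner.

def self_correct(question, base_lm, verifier, refiner, threshold=0.5):
    # Step 0: the base LM proposes an initial solution (e.g., via few-shot CoT prompting).
    solution = base_lm.generate_solution(question)
    # Phase 1: (SELF-)VERIFY -- estimate the probability that the initial solution is correct.
    p_correct = verifier.score(question, solution)
    if p_correct >= threshold:
        # Verifier accepts: the initial solution is returned as the final answer.
        return solution
    # Phase 2: SELF-REFINE -- the refiner emits a critique followed by a revision in one pass.
    critique, revised = refiner.refine(question, solution)
    return revised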
3 The SCORE Method Our approach is inspired by rejection sampling finetuning (RFT): sampling diverse solutions for each question and fine-tune LLMs on the self-generated solutions that lead to the correct final answer (Yuan et al., 2023; Huang et al., 2023a; Zelikman et al., 2022). We want to bootstrap the small LM\u2019s inherent ability to generate critiques for reasoning steps. We design an end-to-end pipeline to collect selfcorrection data generated by small LMs at scale, without any distillation from stronger LMs. The self-generated critiques, after filtering, are used to fine-tune the smaller LM itself to bootstrap its ability to self-correct. Concretely, the SCORE pipeline consists of two stages shown in Figure 1 and described below. Stage 1: Generating and Filtering Critiques. We sample N solutions for each question in the training set by few-shot chain-of-thought prompting a base LM (step a). To enable the base LM to reflect on its incorrect solutions, we include a correct solution for the same question (if exists) in the prompt as a hint (step b). We then filter the self-generated critiques based on their correctness and clarity (step c). This process is detailed in Section 3.1. Stage 2: Supervised Fine-tuning of the Refiner. The filtered critiques obtained from stage 1 are used in the next stage for fine-tuning the small LM itself as if it came up with them without any hints. We train a refiner that generates critiques and corrections conditioned on questions and initial solutions (step d). More details are given in Section 3.2. 3.1 Generating and Filtering Critiques Directly generating critique for an incorrect solution without external supervision signals is difficult. In our preliminary experiments, we find it easier for the LM to generate critiques using correct solutions as hints, as the model only needs to compare the different steps between these two solutions and justify the correct ones. In Appendix B, we explain this intuition from a mathematical perspective. To leverage correct solutions as hints for LMs to generate critiques on incorrect solutions, we label these solutions and collect all possible pairs of incorrect-correct solutions for the same questions (Cartesian product between the sets of incorrect and correct solutions). We craft a few-shot critique prompt (Appendix A) to instruct the base LM to generate critiques for the incorrect solution using the paired correct solution as hints.3 Step-level critiques are more useful than solution-level ones since they provide more precise and fine-grained supervision (Lightman et al., 2023; Wu et al., 2023; Uesato et al., 2022a) that mitigate the undesirable behavior of LMs using incorrect reasoning to reach the correct final answer (Khalifa et al., 2023; Zelikman et al., 2022). Therefore, we prompt the model to provide feedback for each step of the initial solution, either endorsing the initial answer (e.g., \u201cthis step is correct\u201d) or pinpointing the errors (e.g., \u201cthere are errors in the step because ...\u201d). To ensure the LM-generated critique is grounded in a specific step, we also ask the model to copy each step before providing feedback on it. Considering 3The total number of incorrect-correct solution pairs could be very large so we sample only one critique per pair. This has already provided a sufficient amount of SCORE fine-tuning data after filtering. 3 these requirements, we design the format of the critique prompt as follows, with a detailed example in Appendix A. 
Critique Prompt Q: {question} Answer 1 (Incorrect): Step 1: ... ... Step n: The answer is x . Answer 2 (Correct): Step 1: ... ... Step n: The answer is y . There are reasoning errors in Answer 1. Please go through each step in Answer 1, use Answer 2 as a reference for the correct approach, and provide feedback that helps correct the errors in Answer 1. End your response with [END]. Let\u2019s go through the errors in Answer 1 and provide feedback: Answer 1 (Incorrect): Step 1: ... Feedback: This step is correct. ... Step i: ... Feedback: This is incorrect. Because ... ... Step n: The answer is x . Feedback: The correct answer, based on the corrected calculations, should be y . [END] Note that the model should suggest the corrected final answer, taken from the hint solution, as part of the feedback for the last step. This forces the model to explicitly leverage the information from the hint solution. Filtering Generated Critiques. After obtaining the raw self-generated critiques, we want to remove the low-quality ones and keep the rest for fine-tuning LMs. Thanks to the well-designed format of critiques, we can apply rule-based filters to remove generated critiques that do not follow the desired format. These criteria include: \u2022 The number of steps and feedbacks (counted by the appearances of \u201cStep {i}:\u201d and \u201cFeedback:\u201d) should be the same. \u2022 Each step should be exactly copied from the initial solution. \u2022 The feedback for the last step should provide the correct answer. The first two criteria check for the well-formedness of the critique and the third one focuses on the correctness aspect. A critique will be removed if it fails to meet any of the three criteria above. Given that a critique could still contain errors even if it suggests the correct final answer in the last step, we add an additional stage of promptingbased filtering besides the above rule-based heuristics. Specifically, we prompt the base LM to revise the incorrect solution given the critique that already passes the aforementioned filtering rule. Assuming the base LM has reasonable ability of following instructions, it is expected to give a correct revision if the generated critique is both clear and error-free. We demonstrate such an example of the correction prompt in Appendix A. In other words, we remove critiques that do not result in a correctly revised answer. After the ruled-based and prompting-based filtering, we obtain the high-quality critiques for fine-tuning LMs to self-refine. 3.2 Supervised Fine-tuning of the Refiner We train the refiner to generate a critique and an improved solution in one pass conditioned on a question and an initial solution. We note that although we provide the correct solutions as hints to generate critiques during data collection, the model is tasked to generate critiques without the hints during fine-tuning and inference. Previously we collected critiques for every step to ensure that we can apply multiple filters to obtain high-quality critiques. But in this step, we truncate the critiques to only keep the feedback for the first error step as the fine-tuning target. This is because it is difficult to ask the LMs to identify and correct all the errors in one pass (Yu et al., 2023b) without referring to correct solutions as hints during inference. The refiner is fine-tuned on truncated critiques and corrections collected in the previous stages with cross-entropy loss. We do not include few-shot demonstrations during fine-tuning. 
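To make the construction of a single fine-tuning example concrete, here is a minimal sketch under our own assumptions about the record layout (the field names and the "incorrect" keyword check are illustrative, not the authors' preprocessing code): the critique is truncated at the first step marked as erroneous and concatenated with the corrected solution to form the training target.

def build_sft_example(question, initial_solution, critique_steps, corrected_solution):
    # critique_steps: list of (step_text, feedback_text) pairs parsed from a filtered critique.
    truncated = []
    for step_text, feedback in critique_steps:
        truncated.append(f"{step_text}\nFeedback: {feedback}")
        if "incorrect" in feedback.lower():
            break  # keep feedback only up to and including the first error step
    # Input: question + initial solution (no hint solution at fine-tuning/inference time).
    prompt = f"Q: {question}\nInitial Answer:\n{initial_solution}\n"
    # Target: truncated critique followed by the corrected solution, generated in one pass.
    target = "\n".join(truncated) + f"\nCorrected Answer:\n{corrected_solution}"
    return {"prompt": prompt, "target": target}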
We apply masks on the input tokens so that they do not contribute to the loss. Although we only do 1 iteration for the refinement in this work, we later show that small LMs can already achieve great improvement after 1 round of self-correction when paired with a strong verifier. As for multiple iterations, it should be able to solve multiple errors within one solution, which we leave as future work. 4 GSM8K CSQA # % # % Base LM: LLaMA-2-13b-chat Raw critiques 56,843 100.0 42,705 100.0 After rule-based filering 36,337 63.9 36,436 85.3 After prompting filtering (for SCORE fine-tuning) 14,499 25.5 24,511 57.4 Base LM: Gemma-7b-it Raw critiques 52,669 100.0 40,604 100.0 After rule-based filering 17,209 32.7 35,929 88.5 After prompting filtering (for SCORE fine-tuning) 4,623 8.8 12,972 31.9 Table 1: Statistics of the critique data generated from our SCORE pipeline. Although Gemma-7B has fewer data left after filtering, it still achieves greater improvement than LLaMA-13B by self-correction (Section 5.1), suggesting that Gemma-7B is more effective at learning self-correction from SCORE. 4 Experimental Setup Self-Correction Data Collection. As stated in Section 3.1, we sample N = 10 solutions from the base model with the chain-of-thought (CoT) prompts shown in Appendix A, label their correctness, and formulate incorrect-correct solution pairs for critique generation. We separately collect data for each base LM and task. This results in the number of raw critiques shown in Table 1. We sequentially apply rule-based filtering and promptingbased filtering to obtain high-quality critiques for the fine-tuning data. Verifiers. We experiment with verifiers of different level capabilities to gauge their impacts on self-correction performance. First, we adopt a simple baseline for training a self-verifier following Cobbe et al. (2021). The self-verifier is a model with the same architecture as the base LM, conditioned on the question and a candidate solution to judge the probability that the solution is correct/incorrect. Specifically, we label the solutions sampled from the base LM as incorrect (0) or correct (1) solely based on their final answers and finetune the verifier with a binary classification head on the last-layer representation of the last token in the input sequence \u201cQuestion: {q} \\n Solution: {s} \\n Is this solution correct?\u201d. Since the fine-tuning data is imbalanced between correct and incorrect solution, we re-weight the loss for each class with regarding to its proportion. During inference, the verifier model outputs a probability of the initial solution being incorrect, and the refinement is introduced only when the confidence of the verifier\u2019s predictions exceeds a certain threshold, which is automatically chosen in a way that maximizes the accuracy on the dev set and then fixed during test-time predictions. Since fine-tuned small LMs are still weak verifiers that bottleneck the performance of self-correction, we also experiment with a second option by using gpt-4 as an off-the-shelf strong verifier to demonstrate the potential of our fine-tuned refiner. We do so by few-shot prompting gpt-4 to predict the correctness of the initial solution by smaller LMs, with the verifying prompt shown in Appendix A. Finally, we directly use the gold labels of the initial solutions as signals to determine when to self-refine. This oracle verifier setting provides an upper bound for the refiners\u2019 performance. Benchmarks and Base Models. 
To demonstrate the effectiveness of the SCORE method, we conduct experiments on two popular datasets: GSM8K (Cobbe et al., 2021) for mathematical reasoning and CommonsenseQA (Talmor et al., 2018) for commonsense reasoning. We also conduct transferability studies and evaluate the generalization performance of our fine-tuned refiner on MATH (Hendrycks et al., 2021) for mathematical reasoning, QASC (Khot et al., 2020) and RiddleSense (Lin et al., 2021) for commonsense reasoning. Specifically, for mathematical reasoning, we train self-verifiers and SCORE refiners using only GSM8K training data and evaluate them on the whole GSM8K test set and a subset of MATH test set,4 following the practice of Hosseini et al. (2024). Similarly, for commonsense reasoning, we fine-tune our models using only CommonsenseQA training data and evaluate them on the whole dev5 set of CommonsenseQA, QASC, and RiddleSense. Since questions in CommonsenseQA, QASC, and Riddlesense have a multiple-choice format, we also include a random refiner baseline that randomly picks a choice different from the initial answer, following the practice of Huang et al. (2023b). We explore two open-source smaller language models, namely LLaMA-2-13B-chat (Touvron et al., 2023) and Gemma-7B-it (Team et al., 2024) 4This subset includes a total 181 problems of Level 1 difficulty in MATH with question types of algebra, Counting & probability, prealgebra and number theory, where the final answer is a number and no latex exists in the question. 5The test labels of these datasets are hidden, so we use the original dev set as our test set, following Kojima et al. (2022); Kim et al. (2023). 5 Verifier Refiner GSM8K GSM8K \u2192 MATH Subset CSQA CSQA \u2192 QASC CSQA \u2192 RiddleSense V. F1 ACC V. F1 ACC V. F1 ACC V. F1 ACC V. F1 ACC Base LM: LLaMA-2-13B-chat Initial answers by few-shot prompting 37.2 23.8 69.7 65.9 57.6 prompted prompted 31.9 36.9 -0.3 20.0 23.8 +0.0 52.5 62.2 -7.5 53.1 63.3 -2.6 51.9 55.2 -2.4 oracle 100.0 39.7 +2.5 100.0 26.5 +2.7 100.0 83.7 +14.0 100.0 76.1 +10.2 100.0 72.7 +15.1 self SCORE (fine-tuned) 51.1 37.5 +0.3 36.0 23.8 +0.0 47.1 69.7 +0.0 45.8 65.6 -0.3 42.1 57.5 -0.1 gpt-4 88.4 41.4 +4.2 82.9 26.5 +2.7 80.3 72.4 +2.7 85.9 68.1 +2.2 87.7 67.3 +9.7 oracle 100.0 46.0 +8.8 100.0 31.5 +7.7 100.0 86.2 +16.5 100.0 78.0 +12.1 100.0 76.4 +18.8 Base LM: Gemma-7B-it Initial answers by few-shot prompting 36.3 27.1 67.2 65.0 57.1 prompted prompted 45.3 36.3 +0.0 39.2 28.2 +1.1 55.7 65.4 -1.8 61.0 65.1 +0.1 58.2 55.3 -1.8 oracle 100.0 37.7 +1.4 100.0 29.3 +2.2 100.0 74.5 +7.3 100.0 71.3 +6.3 100.0 62.3 +5.2 self SCORE (fine-tuned) 56.8 36.7 +0.4 49.5 27.1 +0.0 42.0 67.5 +0.3 41.0 65.2 +0.2 38.1 56.8 -0.3 gpt-4 89.5 42.5 +6.2 82.8 39.2 +12.1 82.7 75.0 +7.8 89.9 72.8 +7.8 85.6 64.9 +7.8 oracle 100.0 47.4 +11.1 100.0 44.2 +17.1 100.0 85.4 +18.2 100.0 77.3 +12.3 100.0 72.7 +15.6 Table 2: Performance of SCORE models using LLaMA-2-13B-chat and Gemma-7B-it as base LM. We report F1 score of the verifiers (V. F1) and final answer accuracy (ACC). We include test results for training tasks (GSM8K and CommonsenseQA/CSQA), as well as transfer evaluation of GSM8K trained models on MATH subset, CSQA trained models on QASC and RiddleSense. All models use greedy decoding. We highlight the best-performing system per model without using an oracle verifier. On each dataset, the superior model among the highlighted ones is indicated in bold. as the base LMs to generate self-correction data and evaluate their self-correction abilities. 
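For concreteness, the following sketch illustrates one way the self-verifier's confidence threshold described above could be chosen on the dev set and reused at test time; verifier, refiner, final_answer_fn, and the example fields are hypothetical stand-ins rather than the paper's implementation.

def choose_threshold(dev_examples, verifier, refiner, final_answer_fn, candidate_thresholds):
    # Pick the confidence threshold that maximizes final-answer accuracy on the dev set.
    best_t, best_acc = None, -1.0
    for t in candidate_thresholds:
        correct = 0
        for ex in dev_examples:
            p_incorrect = verifier.score_incorrect(ex["question"], ex["solution"])
            if p_incorrect > t:
                _, answer = refiner.refine(ex["question"], ex["solution"])  # (critique, revision)
            else:
                answer = ex["solution"]
            correct += int(final_answer_fn(answer) == ex["gold_answer"])
        acc = correct / len(dev_examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t  # fixed afterwards and reused unchanged for test-time predictions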
In Appendix C, we also investigate whether our self-correction fine-tuning can be built on top of other fine-tuning methods (e.g., rejection-sampling fine-tuning) to further boost reasoning performance. Fine-tuning and Evaluation. We fine-tune the base LM using the LLaMA-Factory library (Zheng et al., 2024b) with LoRA (Hu et al., 2022). We set the low-rank dimension to 32, the learning rate to 2e-5, the number of training epochs to 3, and the batch size to 32. During inference, we set the temperature to 0 (i.e., greedy decoding) and the max sample length to 2,048. All our experiments can be conducted on 4 x A40 GPUs with 48GB of memory each. 5 Results In this section, we first present the experimental findings of the SCORE method on various models and datasets (Section 5.1). To better understand the performance changes after self-correction, we then analyze the behaviors of verifiers and refiners (Section 5.2) and further highlight several key design decisions of our pipeline with ablation studies (Section 5.3). Lastly, we show the impact of SCORE fine-tuning data size on self-correction performance (Section 5.4). 5.1 Main Findings Table 2 presents the primary evaluation results for our fine-tuned models compared to baseline models. The results include two performance metrics: the verifier F1 (\u201cV. F1\u201d), which assesses the precision and recall of the verifier\u2019s predictions; and the final accuracy (\u201cACC\u201d), which measures the accuracy of the final answer after self-correction. We have four major findings. 1) The critique-correction data collected by our SCORE pipeline enhances the base LM\u2019s capability for self-correction. Our fine-tuned models consistently bring large improvements in final accuracy over the initial answer obtained by few-shot prompting. However, the prompting-based self-correction baseline (prompted verifier + prompted refiner in Table 2) proposed by Madaan et al. (2023) deteriorates the final predictions, as LMs struggle to identify errors in their reasoning (Huang et al., 2023b) and possess limited self-correction abilities before bootstrapping. On the multiple-choice CommonsenseQA questions, our SCORE fine-tuned refiner achieves a much larger improvement than the random baseline under the oracle verifier, indicating that our model is not simply making random guesses. Figure 2: Confusion matrices of the predictions by the verifier and the refiner on the GSM8K test set. The base LM is LLaMA-2-13B-chat. \u201cTrue label\u201d means the correctness of the initial solution. The predicted label of the verifier represents whether the verifier judges it as correct (1) or incorrect (0). The predicted label of the verifier + refiner is the correctness of the final answer. The strong verifier (gpt-4) makes fewer false positive predictions than the weak self-verifier and unleashes the potential of the small LM to revise an incorrect answer into a correct one more often than the other way around. 2) Our framework improves self-correction for various base LMs on different types of reasoning tasks. We validate the effectiveness of our SCORE fine-tuning on both math reasoning and commonsense reasoning tasks with two pretrained LMs.
In principle, our task-agonistic pipeline can be applied to a variety of datasets whose reasoning could be expressed in a step-by-step format. We also observe that although the initial solutions proposed by Gemma-7B are worse than LLaMA-13B (e.g., 67.2 < 69.7 on CommonsenseQA), Gemma-7B\u2019s accuracy surpasses LLaMA-13B after self-correction (e.g., 75.0 > 72.4 on CommonsenseQA). Considering that Gemma-7B is fine-tuned with even less self-correction data (Table 1), we believe Gemma is more effective at learning self-correction skills from SCORE fine-tuning. 3) The self-correction performance is largely bottlenecked by the verifier rather than the refiner. Using the same fine-tuned refiner, the final accuracies vary a lot among different verifiers. The upper bound performance suggested by an oracle veriVerifier Refiner GSM8K CSQA Freq. Contrib. Freq. Contrib. Base LM: LLaMA-2-13b-chat prompted prompted 3.7 10.2 17.5 19.6 self 2.7 2.9 1.2 40.0 oracle 62.8 4.0 30.3 46.2 self SCORE (fine-tuned) 19.0 10.8 3.0 33.3 gpt-4 63.3 15.6 38.8 40.9 oracle 62.8 14.0 30.3 54.3 Base LM: Gemma-7b-it prompted prompted 18.7 21.5 20.2 45.3 self 9.9 9.9 2.9 42.9 oracle 63.7 2.1 32.8 22.4 self SCORE (fine-tuned) 27.9 14.1 0.6 57.1 gpt-4 63.4 17.1 38.6 48.4 oracle 63.7 17.4 32.8 55.6 Table 3: Analysis of self-correction behaviors. The settings are the same as those in Table 2. Freq. (in percentage) means the frequency with which the verifier decides to self-correct. Contrib. (in percentage) refers to the extent to which these self-correction attempts enhance the model\u2019s task performance. To enhance the final accuracies in Table 2, the system should maintain a balanced frequency of self-correction, ideally similar to that of the oracle verifier. Additionally, the system should possess strong refinement capabilities, as indicated by a high contribution score. fier demonstrate great potential for self-correction, yet a weak self-verifier can only bring minor improvements, if not misguiding the refiner. Nevertheless, when combined with a more advanced verifier, such as GPT-4, our refiner achieves a significant increase in final accuracy, e.g., an average of +8.3 across five datasets for Gemma-7b-it. The confusion matrices in Figure 2 show the system of gpt-4-as-verifier + SCORE-refiner is more likely to modify an incorrect answer to a correct one than the other way round. This observation underscores the necessity of effectively tackling the problem of reasoning verification before significant advances in self-correction can be attained. Future work could focus on the improvement of reasoning verification that is built upon a mechanistic (Y\u00fcksekg\u00f6n\u00fcl et al., 2023; Yang et al., 2024) and representational (Zou et al., 2023; Zheng et al., 2024a) understanding of LMs\u2019 internal reasoning process. 4) The enhanced self-correction skills can transfer across different datasets. When evaluating our fine-tuned refiner on unseen datasets, it still demonstrates consistent improvement over the baselines (up to +12.1 by the GSM8K-trained Gemma-7B 7 Critique for only the first error step? Generating critique & correction in one pass? Final Accu. Oracle Accu. Initial answers by few-shot prompting 39.4 39.4 ! ! 40.2 49.5 % ! 39.6 46.4 ! % 39.4 43.6 % % 39.9 47.0 Table 4: Ablation study on the format of critique and decoupling critique and correction generation. Results are shown on GSM8K dev set with LLaMA-2-13B-chat as base LM. on MATH subset). 
This shows that the model is learning generalizable self-correction skills rather than overfitting to a specific dataset. Additionally, we find that the verifier does not transfer as well as the refiner, reiterating the difficulty of reasoning verification for LMs. 5.2 Analysis of Self-Correction Behaviors Following the methodology of Yu et al. (2023b), we focus on two key metrics to understand the model\u2019s self-correction behaviors: 1) the frequency with which the verifier decides to self-correct (Freq.), and 2) the extent to which these self-correction attempts enhance the model\u2019s task performance (Contrib.). Self-correction Freq. is measured by the ratio of self-correction attempts to the size of the test set, while self-correction Contrib. is determined by the number of instances in which these attempts successfully resulted in the correct answer. Table 3 presents a detailed analysis of the model\u2019s self-correction behaviors. Our analysis demonstrates that our fine-tuned refiner makes a higher contribution to the final self-correction performance, explaining why it outperforms the prompting-based refiner (Madaan et al., 2023), as shown in Table 2. Additionally, we find that our fine-tuned verifier and the gpt-4 verifier maintain a more reasonable frequency of self-correction, striking a better balance between correction attempts and accuracy. 5.3 Ablation Studies To validate the various design decisions made in constructing our pipeline, we conducted a series of ablation studies. The key findings from Table 4 can be summarized as follows. 1) It is easier for the LM to identify only the first erroneous step, as the performance drops if we challenge it to critique every step. 2) There is no need to separate the SELF-REFINE process into two modules\u2014one for generating critiques and another for corrections. Such a separation not only increases the system\u2019s complexity and delays inference but also leads to a diminished final accuracy. 5.4 Scaling with Fine-tuning Data Size We investigate the data efficiency of refiner fine-tuning. Figure 3 plots the size of the refiner fine-tuning data against the final accuracy with different verification settings on GSM8K. We fine-tune the LLaMA-2-13B-chat base model on a random subset (varying from 25% to 75%) of the 14,499 total critique-corrections previously shown in Table 1. We find that our refiner benefits from more fine-tuning data when paired with strong verifiers (oracle labels or gpt-4). Yet this effect is not observed when using a weak self-verifier, again highlighting the importance of verification for self-correction. We find that increasing the fine-tuning dataset size yields accuracy improvements up to a certain point; beyond approximately 10k examples (representing 75% of the SCORE fine-tuning data), the performance does not further improve. The optimal fine-tuning data size is likely task-dependent, and further research is needed to determine the data requirements in different contexts. Figure 3: Final accuracy on the GSM8K dev set w.r.t. the percentage of refiner fine-tuning data used and the type of verifier. The base LM is LLaMA-2-13B-chat. Under a strong verifier setting, our refiner brings greater performance gain with more fine-tuning data. Yet the performance plateaus with a weak verifier.
8 In this study, we investigate how to leverage minimal signals from strong LMs to teach small LMs to self-correct their reasoning. We propose the SCORE method to collect self-correction finetuning data solely from small LMs. We find that SCORE-fine-tuned small LMs become better refiner models without relying on knowledge distillation from stronger LMs, yet they still need strong verifiers to be successful at self-correcting their reasoning errors. Our results highlight that the self-verification limitation of LMs currently poses an obstacle to the advancement of intrinsic selfcorrection in reasoning and thus warrants future research. Limitations Generating large amounts of synthetic data from smaller LMs requires intensive GPU computations, yet it removes the reliance on proprietary API models. Comparing the cost-efficiency of these two approaches will help us better trade-off between data generated from smaller LMs and larger LMs. Introducing the verifier and refiner during inference also causes additional latency, which we discuss in Appendix E. 9", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2404.14527v1", + "title": "M\u00e9lange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity", + "abstract": "Large language models (LLMs) are increasingly integrated into many online\nservices. However, a major challenge in deploying LLMs is their high cost, due\nprimarily to the use of expensive GPU instances. To address this problem, we\nfind that the significant heterogeneity of GPU types presents an opportunity to\nincrease GPU cost efficiency and reduce deployment costs. The broad and growing\nmarket of GPUs creates a diverse option space with varying costs and hardware\nspecifications. Within this space, we show that there is not a linear\nrelationship between GPU cost and performance, and identify three key LLM\nservice characteristics that significantly affect which GPU type is the most\ncost effective: model request size, request rate, and latency service-level\nobjective (SLO). We then present M\\'elange, a framework for navigating the\ndiversity of GPUs and LLM service specifications to derive the most\ncost-efficient set of GPUs for a given LLM service. We frame the task of GPU\nselection as a cost-aware bin-packing problem, where GPUs are bins with a\ncapacity and cost, and items are request slices defined by a request size and\nrate. Upon solution, M\\'elange derives the minimal-cost GPU allocation that\nadheres to a configurable latency SLO. Our evaluations across both real-world\nand synthetic datasets demonstrate that M\\'elange can reduce deployment costs\nby up to 77% as compared to utilizing only a single GPU type, highlighting the\nimportance of making heterogeneity-aware GPU provisioning decisions for LLM\nserving. Our source code is publicly available at\nhttps://github.com/tyler-griggs/melange-release.", + "authors": "Tyler Griggs, Xiaoxuan Liu, Jiaxiang Yu, Doyoung Kim, Wei-Lin Chiang, Alvin Cheung, Ion Stoica", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 LLM Inference Optimization A significant body of research has focused on optimizing LLM inference. One stream concentrates on memory optimization, particularly through improved key-value cache reuse [54] and management strategies [21]. 
Another avenue seeks to minimize latency, such as scheduling optimization [51, 1, 47], speculative decoding [22], and kernel optimization [8, 42]. Additional optimizations include quantization [10, 23, 50] and sparsification [9]. Instead of altering inference logic, our work assumes a fixed inference engine configuration and concentrates on reducing LLM deployment costs by choosing cost-effective GPU instance types. 2.2 Machine Learning with Cloud Resources Recent studies have explored various strategies for reducing the cost of machine learning (ML) inference or training. Several focus on utilizing spot instances [43, 15, 52, 13] are complementary to our work. Other work targets deployment on heterogeneous resources [5, 6, 31, 28, 26], but focuses primarily on model training rather than serving. Leveraging serverless instances for inference cost reduction has been examined in [2]. Nonetheless, prior work predominantly concentrates on machine learning prior to the advent of LLMs, which we show to have unique characteristics that significantly impact cost efficiency. More recent studies, such as [27, 18], focus on LLMs, but they propose strategies for reducing costs via optimal migration plans and parallelism with heterogeneous resources. They do not identify the key LLM service characteristics that impact cost efficiency, which our work highlights. Another line of work [56, 38] explores splitting LLM inference into its two phases (prefill and decode) and performing the two phases on separate nodes, perhaps with different GPU types. Our work shows that, even within a phase, the best GPU type can change based on LLM service specifications.", + "pre_questions": [], + "main_content": "Introduction Large language models (LLMs) like GPT-4 [37] and the Llama model family [44, 45] are increasingly integrated into many online services, including search engines [39, 25], chatbots [36], and virtual assistants [29, 48, 49]. However, a significant obstacle in deploying LLM services is their high operational costs. The substantial size and computational demands of LLMs require the use of hardware accelerators, typically GPUs1, to achieve high-performance inference. Unfortunately, GPUs are expensive. For example, renting just a single on-demand NVIDIA A100 on a major cloud provider costs over $2, 600 per month, and many services require multiple A100s to serve especially large models or request volumes. Prior work [51, 54, 21] has introduced methods for increasing inference throughput to squeeze ever more performance out of expensive GPUs. However, less attention has been given to choosing the best GPU type(s) to use for a given LLM service. The broad and growing market of hardware accelerators, including NVIDIA GPUs [35], AMD GPUs [46], Google TPUs [20], CPUs [24], and more [4], creates a diverse option space with a wide range of hardware specifications and rental prices. Table 1 depicts the specs of just four NVIDIA GPUs, which already exhibits a large variety of costs and performance. Within this option space, we find that there is not a linear relationship between GPU cost and performance, which creates variations in GPU \u2217Equal contribution. 1For brevity, we use \u201caccelerator\u201d and \u201cGPU\u201d interchangeably in this work. 1 arXiv:2404.14527v1 [cs.DC] 22 Apr 2024 cost efficiency, defined based on common pricing models [36] as the sum of input and output tokens served per GPU dollar cost (T/$). 
Instead, we show that a GPU\u2019s cost efficiency is strongly impacted by three key LLM service characteristics: 1. Request Size: An LLM request\u2019s size is made up of its input and output token lengths. For small request sizes, we find that lower-end GPUs produce greater T/$ than their high-end GPU counterparts. Deployment expenses can be reduced by employing cheaper GPUs for smaller requests while reserving costly, high-capacity GPUs for handling larger request sizes. 2. Request Rate: To reduce resource waste, provisioned GPU capacity should align with request volume. An expensive under-utilized GPU exhibits lower T/$ than a cheaper GPU that still meets service demand. Therefore, at low request rates, services can reduce costs by right-sizing from expensive high-end GPUs to cheap low-end GPUs. At higher request rates, leveraging a mix of GPU types facilitates finer-grained resource scaling to better match request volume. 3. Service-level Objective: Services typically establish latency SLOs to ensure high service quality, with the specific SLO varying according to the service\u2019s interactivity needs. In general, low-end GPUs incur higher latency than high-end GPUs. As a result, low-end GPUs may only meet tight SLOs at a low output token rate (or not at all), severely limiting achieved T/$. Thus, high-end GPUs are often required for stringent latency SLOs, whereas low-end GPUs can reduce costs in loose-SLO settings. Consequently, we find that the under-appreciated heterogeneity of GPUs presents opportunities for increasing GPU cost efficiency and significantly reducing LLM service costs. Consider combining the three observations above into a single service deployment: high-cost A100s may be necessary to serve large requests within SLO requirements, however, low-cost A10Gs can meet the latency deadline for small requests at higher T/$, reducing overall cost. Then, during periods of low service activity, the even cheaper L4 can maintain service availability at the lowest cost. The key challenge, then, is to navigate the diversity of request sizes, request rates, latency SLOs, and GPU instance types to find the optimal GPU selection for a given LLM service. In this paper, we present M\u00b4 elange2, a framework that maximizes GPU cost efficiency by automatically and efficiently navigating the heterogeneity of GPUs and LLM service specifications to derive the best GPU provisioning strategy. M\u00b4 elange\u2019s strength stems from its heterogeneity-awareness, that is, its knowledge of how diverse LLM service characteristics impact the cost efficiency of each GPU type. M\u00b4 elange takes as input the service workload profile, latency SLO, and set of GPU type options, and produces the GPU allocation that minimizes deployment costs while attaining SLO. We formulate the task of GPU selection as a cost-aware bin-packing problem where bins are GPUs with an associated capacity and cost, and items are request slices defined by a request size and rate, and solve the bin-packing problem with an off-the-shelf integer linear programming (ILP) solver. M\u00b4 elange can be easily extended to include new GPU types (or other hardware) and alternative definitions of SLO, flexibly supporting diverse LLM service deployments. We evaluate M\u00b4 elange across four GPU types (L4, A10G, A100, and H100), three datasets with varying request size distributions, and a range of request rates and SLOs. 
Compared to using only a single GPU type, M\u00b4 elange\u2019s heterogeneity-aware mixed-GPU-type approach achieves 9-77% cost reduction in short-context workloads (interactive chats), 2-33% in long-context workloads (document-based tasks), and 4-51% in mixedcontext workloads (both in a single service). M\u00b4 elange efficiently derives GPU allocations within 1.2 seconds, and attains SLO for > 99.95% of requests at a loose SLO, and > 99.5% at a tight SLO. In summary, this paper makes the following contributions: \u2022 We present an extensive analysis of GPU cost efficiency and identify three key LLM service characteristics as significant determinants of GPU cost efficiency: request size, request rate, and latency SLO (\u00a7 4). \u2022 We introduce M\u00b4 elange to efficiently select the most cost-efficient set of GPU instances for a given LLM deployment, while ensuring that the resulting allocation satisfies a prescribed SLO requirement (\u00a7 5). \u2022 We evaluate M\u00b4 elange\u2019s efficacy, demonstrating its significant cost reductions (up to 77%) across a range of real-world workloads, GPU types, and SLO constraints (\u00a7 6). 2 Type L4 A10G (PCIe) A100-80G (SXM) H100 (SXM) On-demand Price ($/h) 0.7 1.01 3.67 7.5163 Instance Provider GCP AWS Azure RunPod Instance Name g2-standard-4 g5.xlarge NC24ads A100 v4/N.A. N.A. Memory (GB) 24 24 80 80 Memory Bandwidth (GB/s) 300 600 1935 3350 FP16 (TFLOPS) 242 125 312 1979 Table 1: Specifications of four NVIDIA GPUs: L4, A10G, A100, and H100. 3.1 LLM Request Size Variance Unlike traditional machine learning workloads, LLM tasks exhibit significant variance in request sizes, or input and output lengths. For example, ResNet [16] requires a fixed-dimension input (image size) and results in a fixed-dimension output (classification size). Conversely, transformer-based language models are flexible to support variable-length prompts and produce variable-length generation sequences, as in the Chatbot Arena dataset [53] derived from a real-world LLM chatbot service. Figure 10 illustrates the request size distributions of Chatbot Arena, demonstrating the extensive diversity of request sizes in practical scenarios. Unsurprisingly, high variance in request sizes introduces significant variation in request latency. As illustrated in Figure 1, request latency can increase by 110\u00d7 when the input/output length expands from 25 tokens to 2000 tokens for the Llama2-7B model served on an A100 GPU. Consequently, it becomes crucial to recognize that LLM requests, unlike non-transformer models, impose varied loads on GPU resources. 2M\u00b4 elange is the French word for \u201cmixture\u201d 3H100\u2019s hourly pricing was computed as described in the Hardware section above. 3 (a) LLaMA-7B 85X (b) LLaMA-70B Figure 1: Request latency of different input/output lengths on A100-80G. 3.2 Unknown Output Length In most online services, an LLM request\u2019s output length is not known a priori. In this paper, we evaluate GPU cost efficiency based on both input and output lengths. We do this to develop a holistic understanding of GPU cost efficiency, but M\u00b4 elange\u2019s GPU provisioning decision does not require specific knowledge of the output lengths of individual requests. Instead, it relies only on an estimated distribution of request sizes. We believe it is a fair assumption that a service\u2019s GPU allocator is given a distribution of expected request sizes based on the historical data of previously served input and output lengths. 
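As a small illustration of how such a request-size profile can be summarized from service logs (our own sketch; the bucket edges are made up and not taken from the paper), logged (input, output) token counts can be grouped into a two-dimensional histogram:

from collections import Counter

def bucketize(length, edges=(25, 100, 250, 500, 1000, 2000)):
    # Map a token count to the smallest bucket edge that covers it (illustrative edges).
    for edge in edges:
        if length <= edge:
            return edge
    return edges[-1]

def request_size_histogram(requests):
    # requests: iterable of (input_tokens, output_tokens) pairs from service logs.
    hist = Counter()
    for in_len, out_len in requests:
        hist[(bucketize(in_len), bucketize(out_len))] += 1
    return hist  # counts per (input-bucket, output-bucket) cell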
Because output lengths are a significant contributor to the load of individual requests, unknown output lengths are primarily a challenge for the load balancer, not the allocator. While important, the task of output length prediction for load balancing is orthogonal to M\u00b4 elange. Therefore, to evaluate the efficacy of M\u00b4 elange\u2019s GPU allocations, we use a load balancer that assumes knowledge of output lengths. We are actively working to remove this assumption by exploring load balancers based on output length prediction. There are several prior works that perform online LLM output length prediction with high accuracy [19, 55], but they have not been applied to load balancing. To the best of our knowledge, there is no load balancer that addresses the problem of unknown output lengths, and we believe this to be an promising area of future work. 4 GPU Cost Efficiency Analysis In this section, we analyze GPU cost efficiency in the context of LLM serving. We first describe our key definitions (\u00a7 4.1), then evaluate the cost efficiency of serving Llama2-7b on two widely used GPUs, NVIDIA\u2019s A100 [34] and A10G [33] to show that GPU cost efficiency is significantly influenced by request size(\u00a7 4.2), request latency SLO(\u00a7 4.3), and request rate(\u00a7 4.4). Finally, we validate the generality of our findings by extending our investigation to include additional hardware, specifically NVIDIA\u2019s H100 and L4 GPUs, and a larger model variant, Llama2-70B (\u00a7 4.5). For clarity, the plots are tagged with the request size, request rate, and SLO used to generate the plot. In each setting, we use vLLM-0.2.7 as the inference engine [21]. Results can differ across versions. 4.1 Definitions Service-level Objective (SLO). As in prior work [21, 56, 51], we use the average Time Per Output Token (TPOT) as our Service-level Objective (SLO). Average TPOT is determined by dividing total request latency by the number of generated tokens. In general, SLOs are application dependent: in-line code editors (e.g., GitHub Copilot [29]) require tight latency deadlines to suggest real-time code additions, whereas text summarization services may permit additional processing time to generate concise and accurate summaries for large documents. There are other popular definitions of SLO, such as time to first token and total request 4 (a) Equivalent input and output lengths (b) Input and output lengths vary independently Figure 2: Figure (a) depicts A10G and A100\u2019s relative T/$ across request sizes. Figure (b) expands (a) into separate input and output length dimensions. Tile colors indicate which GPU achieved higher T/$, and values represent the most cost efficient GPU\u2019s percent increase of T/$ relative to the less cost efficient GPU. latency. To simplify our discussion, we use only TPOT, however, M\u00b4 elange is flexible to support alternative definitions of SLO. Cost Efficiency Metric. We use tokens per dollar (T/$) to measure GPU cost efficiency, which is calculated by summing input and output token lengths served within some time period, and dividing the sum by the GPU\u2019s rental cost for the same period. The resulting value enables us to directly compare cost efficiency across GPU instance types with different rental costs. Pricing inference based on token lengths is a common practice in LLM services [36, 12], but some services set different prices for input and output tokens. 
We only compare T/$ between GPUs in settings where the request sizes are the same, so we do not lose generality to such cost models. In settings where request sizes differ, we report the overall cost of the GPU allocation that meets the aggregate workload. In general, we derive T/$ based on profiling a GPU at maximum saturation. When an SLO is specified, T/$ is calculated by finding the highest GPU saturation at which average TPOT still meets the SLO requirement. 4.2 Request Size and Cost Efficiency We now show that request sizes, shown to be widely varying (\u00a7 3.1), dramatically affect GPU cost efficiency. We served Llama2-7b on A100 and A10G (specifications reported in Table 1), and derived each GPU\u2019s T/$ at maximum GPU saturation across a range of request sizes, with results in Figure 2a. Interestingly, neither GPU achieves highest T/$ across the entire request size space. Instead, each GPU achieves greater cost efficiency within distinct regions of the request size spectrum. For smaller request sizes, A10G exhibits up to 2.6\u00d7 greater T/$ than A100. Conversely, for larger request sizes, A100 achieves up to 1.5\u00d7 the cost efficiency of A10G. We extend this exploration to include both input and output lengths in Figure 2b to observe how they affect cost efficiency separately. We find that the two dimensions influence cost efficiency in a similar manner: smaller sizes benefit A10G, and larger sizes are best served on A100. Once again, there exists a clear boundary within the input/output length spectrum where the cost efficiency advantage shifts from A10G to A100 as request sizes increase. In fact, selecting a single GPU type to serve requests across the entire request size space misses opportunities to produce up to 72% more output tokens for the same cost. Source of Cost Efficiency Variation. Digging deeper into why request size impacts relative cost efficiency between GPUs, we find that it is largely due to the heterogeneity of GPU hardware. Given that batch size directly influences throughput (i.e., request processing rate), we inspect the source of cost efficiency variation by examining the effect of request size on achieved batch size. Figure 3 depicts the absolute batch sizes and 5 (a) Absolute batch sizes (b) Dollar-normalized batch sizes Figure 3: (a) depicts the absolute batch sizes of A10G and A100 serving Llama2-7b at maximum saturation, (b) reports the same batch sizes divided by GPU cost, plotting with respect to A10G. batch sizes normalized by instance cost of each GPU at maximum saturation. Note that Figure 3b closely resembles Figure 2a\u2019s plot of relative T/$ at maximum saturation, verifying that batch size indeed serves as a proxy for throughput. A10G and A100 have similar dollar-normalized batch sizes at 250 input/output tokens, but as the request size increases to 2000 input/output tokens, A10G\u2019s absolute batch size decreases by a factor of 9\u00d7 whereas A100\u2019s only decreases by 6\u00d7 due to its superior memory size and bandwidth. As a result, A100\u2019s cost efficiency advantage over A10G increases accordingly with the increase in request size. In contrast, reducing the request size from 250 to 25 input/output tokens sees A10G\u2019s batch size expanding by 15.2\u00d7, whereas A100\u2019s growth is more modest at 5.89\u00d7. We find that this difference is primarily due to the interference of mixing prefill and decode phases of a greater number of requests, as demonstrated in prior work [17]. 
Because A100\u2019s batch sizes are larger in absolute terms, A100 is more significantly constrained by per-request latency overheads than A10G is. As a result, A10G\u2019s dollar-normalized batch size exceeds A100\u2019s at short request lengths, leading to greater overall T/$ for A10G. This illustrative case demonstrates how the interaction between request size and achieved T/$ can be subtle, and creates a cost efficiency trade-off space among GPU types. Key Takeaways: GPU cost efficiency is highly dependent on the sizes of requests served. Within the request size space, there are regions where serving with different GPU types is the most cost-effective. In general, lower-end GPUs are more cost-effective for small request sizes whereas higher-end GPUs are best for large request sizes. 4.3 SLO and Cost Efficiency In this section, we show the impact of SLO on cost efficiency. We measure T/$ by finding the maximum saturation of each GPU while average TPOT remains below SLO, and repeat this across several TPOT deadlines (40ms to 120ms) as shown in Figure 4. Under tight SLO constraints (<60ms), A100 demonstrates significantly greater T/$ than A10G (> 2\u00d7) due to A10G\u2019s higher processing latency, which severely limits its output token rate. However, as the SLO is gradually loosened (60-120ms), A10G\u2019s higher latency is less problematic, dramatically increasing its T/$ and surpassing that of A100 (by > 40%). In general, when SLO is stringent, high-end low-latency GPUs are the most viable option because cheaper high-latency GPUs are unable to meet the steep performance requirements. Loosening the SLO increasingly permits the use of cheaper GPUs that can meet the reduced performance requirements at much lower cost. Further, Figure 5 highlights the interplay between SLO and request size to show that neither can be considered in isolation when determining cost efficiency. Varying the latency SLO adjusts the boundary in the request size space between which different GPU types are more cost effective, and also impacts the degree to which 6 Figure 4: T/$ comparison between A10G and A100 across a range of TPOT SLO parameters. Figure 5: Relative increase in T/$ when combining SLO and request size. Shaded areas indicate regions where A10G fails to satisfy the specified SLO. one GPU is more cost effective than the other. For example, with a 40-50ms SLO, A100 always has higher T/$ (by up to 123%). At 70ms, A10G shows modest benefit over A100 for small request sizes. And at 100-120ms, A10G demonstrates much greater T/$ advantage over A100 for the same request sizes (up to 61%). Key Takeaways: To comply with strict SLOs, expensive GPUs are often necessary due to the increased latency of cheaper GPUs. However, as SLO is loosened, low-end GPUs can be used to cut deployment costs. 4.4 Request Rate and Cost Efficiency In this section, we show how request rates influence which GPU, or set of GPUs, is the most cost-effective. Figure 6 illustrates the cost of serving Llama2-7b for a range of request rates using three provisioning strategies: only A10Gs, only A100s, or a mix of both A10Gs and A100s. The y-axis is absolute cost instead of T/$ because each provisioning strategy serves the same request rates and thus the same output tokens; only the cost varies across strategies. As the request rate increases, A100-only is increasingly more cost-effective relative to A10G-only. 
This is because the requests in this plot were of size [1000 in tokens, 250 out tokens], which \u00a7 4.2 shows is more cost effective on A100. However, even in this case, the A10G-only strategy still presents benefits at low request rates (0 \u22121.5 req/s). Idle periods of low activity are common in real-world services, and the GPU deployment should right-size to the cheaper GPU (here, A10G) when a higher-end GPU (here, A100) is drastically under-utilized. Further, a notable finding is that a hybrid deployment approach, combining both A10G and A100 GPUs, yields the greatest cost efficiency. Because A100s have such large capacity, scaling with only A100s is coarse-grained and leads to under-utilized resources. Instead, A10Gs and A100s can be mixed such that A100s satisfy the bulk of the service demands, while A10Gs handle the remaining load at reduced cost. Key Takeaways: Provisioning a mix of GPU types enables finer-grained resource scaling decisions, which boosts cost efficiency by better utilization of the provisioned instances. At low request rates, LLM deployments should right-size to cheaper low-end GPUs instead of under-utilizing expensive high-capacity GPUs. At higher request rates, a mix of GPU types can be used to better match request load. 4.5 Other Models and Hardware In this section, we demonstrate the generality of our findings by including additional GPU types and a larger model variant (Llama2-70b) to our analysis. In Figure 8, we present relative cost efficiency across four types 7 of GPUs, and observe a progression of the most cost efficient GPU from L4 to A10G, then A100, and finally H100 as the input/output lengths extend. This pattern underscores the advantage of high-end GPUs for processing longer context and output lengths, while low-end GPUs emerge as more cost-effective for shorter input/output scenarios. Similar trends are observed with the Llama2-70B model when comparing the H100 and A100 GPUs, as detailed in Figure 7, reinforcing these insights. Key Takeaways: The effects of request size on GPU cost efficiency (\u00a7 4.2) generalize to settings with several GPU types and larger model sizes, and similarly leads to significant GPU cost efficiency variations in the request size space. Figure 6: Aggregate GPU hourly rental cost at different request rates. A mix of A100 and A10G consistently achieves the lowest cost. Figure 7: T/$ comparison between H100x2 and A100x2 serving Llama2-70b. 5 M\u00b4 elange: Automating Cost-Efficient GPU Selection Building on the analysis in Section 4, we present M\u00b4 elange, a framework that automates the selection of GPU instances to meet an LLM service\u2019s demand at minimal cost while adhering to SLO constraints. We frame the GPU selection task as a cost-aware bin-packing problem with GPUs as bins and requests as items, and employ Integer Linear Programming (ILP) to derive the solution. 5.1 Problem Formulation We begin by defining the key terms utilized in our problem formulation and solution. Workload: A workload is characterized by its overall request rate along with a distribution of input and output sizes. Given the inherent variability in request sizes, it is crucial to treat the input and output sizes not as fixed values, but as distributions spanning a range of possible lengths. Specifically, as illustrated in Figure 9, a workload is a histogram where each bucket corresponds to a range of request sizes and a bucket\u2019s value is the request rate of requests within the bucket\u2019s size range. 
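A minimal sketch of this workload representation follows: a two-dimensional histogram over (input length, output length) buckets whose values are request rates. The bucket edges and the example trace are illustrative assumptions, not the bucket ranges used in the paper.

```python
# Build a workload histogram from an observed request trace.
from collections import defaultdict
import bisect

IN_EDGES  = [0, 100, 500, 1000, 2000, 4000, 8000, 16000]   # token-length bucket edges
OUT_EDGES = [0, 100, 250, 500, 1000, 2000]

def bucket_of(length: int, edges) -> int:
    """Index of the bucket whose range contains `length`."""
    return max(bisect.bisect_right(edges, length) - 1, 0)

def build_workload(trace, window_s: float):
    """trace: iterable of (input_len, output_len); returns {(in_bucket, out_bucket): req/s}."""
    counts = defaultdict(int)
    for in_len, out_len in trace:
        counts[(bucket_of(in_len, IN_EDGES), bucket_of(out_len, OUT_EDGES))] += 1
    return {b: c / window_s for b, c in counts.items()}

# Example: three requests observed over a 2-second window.
print(build_workload([(330, 200), (4200, 310), (350, 180)], window_s=2.0))
```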
Deployment Cost: Cost is computed by summing the hourly rates for each of the selected instances. SLO: We use average TPOT to define SLO, however, M\u00b4 elange can be extended to other definitions of SLO, such as time to first token (TTFT), by profiling maximum T/$ within SLO constraints described in \u00a7 4.1 for any given latency constraint definition. Problem Definition: Given a workload, GPU instance costs, and SLO requirements, our objective is to provision GPUs that can serve the workload at minimal cost while adhering to latency SLO constraints. 8 (a) Best GPU relative to second best GPU (b) Best GPU relative to worst GPU Figure 8: Cost efficiency comparison across four GPUs. Tile colors indicate which GPU achieves greatest T/$ at max saturation for the respective request size. Tile values in (a) are the percent increase in T/$ of the best GPU compared to the second best. Tile values in (b) compare the best GPU to the worst GPU. Black boxes indicate request sizes for which only A100 and H100 are compared because A10G and L4 have too small memory capacity to handle a single request within this size, with more detail in \u00a7 6.2 . 3x A10G 2x A100 1x H100 Obj: Minimize Cost Constraint: Meet SLO Figure 9: Workflow illustration depicting the process of segmenting request rates into slices, followed by the allocation of hardware resources based on solver recommendations. 5.2 Allocation Algorithm The intuition of M\u00b4 elange\u2019s solution is to find the minimal-cost set of GPUs (bins) into which the workload (items) can be bin-packed. To do so, our strategy partitions workload buckets into slices, then assigns the slices to GPUs. Our constraints ensure that the load added to each GPU by the assigned slices does not surpass its maximum capacity. The optimization objective is to reduce the total deployment cost. We discuss bucket size considerations (\u00a7 5.2.1), describe slices in more detail (\u00a7 5.2.2), discuss how load is calculated (\u00a7 5.2.3), then finally detail our ILP formulation (\u00a7 5.2.4). 5.2.1 Request Buckets As described in \u00a7 5.1, a workload is represented by a histogram. The histogram has two dimensions, input length and output length, and each bucket\u2019s value is the aggregate request rate for requests within the bucket\u2019s size range. We make the simplifying assumption that the load (see \u00a7 5.2.2) of each request is the same as the largest request size in the same bucket. This simplifies handling diverse request sizes at the cost of over-estimating the load. Bucket sizes can be tuned to reach the desired balance between granularity and solution complexity, but we have not found overall performance to be sensitive to bucket sizes. 9 5.2.2 Slices A naive bin-packing of the workload into GPUs is to assign each bucket to a single GPU. However, the overall load of a single bucket may exceed the capacity of a single GPU, and the bucket may be most cost effectively served by splitting across different GPU types. Therefore, for finer-grained bin-packing, buckets are broken down into slices, which are characterized by a request size and rate. A parameter, slice factor, indicates the number of slices that each bucket is divided into. In a setting with a slice factor of 8 and a bucket corresponding to requests of size [25 \u2212100 in tokens, 25 \u2212100 out tokens] with a request rate of 4 requests/s, the bucket would be segmented into 8 slices each corresponding to a request rate of 0.5 requests. 
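The bucket-to-slice decomposition just described can be sketched as follows, under the paper's simplification that every request in a bucket is treated as the bucket's largest size; the dictionary layout and names are illustrative assumptions.

```python
# Split each workload bucket into `slice_factor` equal-rate slices, tagging every
# slice with the largest request size of its bucket (the over-estimate noted in 5.2.1).
from typing import Dict, List, Tuple

def make_slices(
    workload: Dict[Tuple[int, int], float],                   # bucket -> req/s
    bucket_max_size: Dict[Tuple[int, int], Tuple[int, int]],  # bucket -> (max_in, max_out) tokens
    slice_factor: int = 8,
) -> List[dict]:
    slices = []
    for bucket, rate in workload.items():
        for _ in range(slice_factor):
            slices.append({
                "size": bucket_max_size[bucket],   # upper edge of the bucket
                "rate": rate / slice_factor,       # equal share of the bucket's rate
            })
    return slices

# A bucket at 4 req/s with slice_factor=8 yields 8 slices of 0.5 req/s each.
slices = make_slices({(1, 1): 4.0}, {(1, 1): (100, 100)}, slice_factor=8)
print(len(slices), slices[0])
```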
5.2.3 Load The ILP solver requires an estimate of the load each slice contributes to a GPU to ensure that the load assigned to an instance does not exceed its capacity and violate the latency SLO. The load of a slice with request size s and rate r on GPU G is calculated as $r / \mathrm{MaxTput}(G, s, \mathrm{SLO})$, where MaxTput(G, s, SLO) is the maximum requests/s G can achieve for requests of size s while remaining under the latency deadline SLO. For instance, if MaxTput(G, s, SLO) = 10 reqs/s and r = 1, the load is calculated as 1/10 = 0.1, or 10%. Each GPU\u2019s maximum capacity is defined as 1 (or 100%). This approximation allows us to calculate the aggregate load of requests with differing sizes and rates. Prior work has proposed cost models for LLM requests [32, 41], but there is not yet a definitive formulation. We found our simple approximation to perform well, but it can be easily replaced with alternative cost models. Based on offline profiling, we compute MaxTput(G, s, SLO) for each bucket in the workload histogram. 5.2.4 ILP Formulation We now describe our ILP formulation. We formulate the problem with two primary decision variables. First, let A be a matrix in $\{0, 1\}^{N \times M}$, where N denotes the total number of slices and M represents the number of GPU instance types. An element $A_{i,j}$ within this matrix is set to 1 if slice i is assigned to GPU type j, and 0 otherwise. The second decision variable, B, is a vector of length M, where each element $B_j$ specifies the number of GPUs of type j to be allocated. $c_j$ denotes the cost of GPU type j and $r_i$ is the request rate of slice i. L is computed offline by the process described in \u00a7 5.2.3, and element $L_{i,j}$ is the percent load of 1 req/s of slice i\u2019s request size on GPU type j. Our objective is to minimize the total GPU allocation cost, with the following mathematical representation: $\arg\min_{B} \sum_{j=1}^{M} B_j \cdot c_j$ (1). The ILP constraints are as follows. First, each task slice is assigned to exactly one GPU type: $\forall i \in \{1, \dots, N\}, \; \sum_{j=1}^{M} A_{i,j} = 1$ (2). Second, for each GPU type, the total number of GPUs designated in vector B must adequately accommodate the cumulative load prescribed to it in matrix A: $\forall j \in \{1, \dots, M\}, \; \sum_{i=1}^{N} A_{i,j} \cdot L_{i,j} \cdot r_i \leq B_j$ (3). Lastly, the entries within matrix A are binary, and the elements of vector B are non-negative integers: $\forall i \in \{1, \dots, N\}, \; \forall j \in \{1, \dots, M\}, \; A_{i,j} \in \{0, 1\}$ (4); $\forall j \in \{1, \dots, M\}, \; B_j \geq 0$ (5). Upon resolving equations (1) through (5), the decision variable B holds the minimal-cost set of GPUs that meet the SLO constraint. We use an off-the-shelf solver to solve the ILP problem [30]. 6 Evaluation In this section, we assess the performance of M\u00e9lange using four GPU types across settings of diverse request sizes, rates, and SLOs. Our evaluations show that M\u00e9lange consistently achieves significant cost savings (up to 77%) compared to single-GPU-type strategies, and M\u00e9lange\u2019s selected GPU allocations successfully attain the TPOT SLO for over 99.5% of requests. 6.1 Methodology Hardware Setup. We use four NVIDIA GPUs in our evaluations, with specifications detailed in Table 1. To determine GPU cost, we select the lowest on-demand price available from major cloud providers (AWS, Azure, and GCP). Because on-demand H100 is not offered by these major providers, we defer to the pricing from RunPod [40] due to its popularity and availability.
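For reference, a minimal sketch of the ILP in equations (1) through (5) is shown below. The paper only states that an off-the-shelf solver is used [30]; the choice of the PuLP modeling library, the variable layout, and the toy numbers here are assumptions for illustration, not the authors' implementation.

```python
# Sketch of equations (1)-(5): A[i][j] assigns slice i to GPU type j, B[j] counts
# purchased GPUs of type j, load[i][j] is L_{i,j}, cost[j] is c_j, slices[i]["rate"] is r_i.
import pulp

def solve_allocation(slices, load, cost):
    N, M = len(slices), len(cost)
    prob = pulp.LpProblem("melange_gpu_allocation", pulp.LpMinimize)

    # Decision variables: binary assignment matrix (eq. 4) and non-negative integer counts (eq. 5).
    A = [[pulp.LpVariable(f"A_{i}_{j}", cat="Binary") for j in range(M)] for i in range(N)]
    B = [pulp.LpVariable(f"B_{j}", lowBound=0, cat="Integer") for j in range(M)]

    # Objective (1): minimize total hourly cost of the allocation.
    prob += pulp.lpSum(B[j] * cost[j] for j in range(M))

    # Constraint (2): every slice is assigned to exactly one GPU type.
    for i in range(N):
        prob += pulp.lpSum(A[i][j] for j in range(M)) == 1

    # Constraint (3): per GPU type, the assigned load must fit in the purchased capacity.
    for j in range(M):
        prob += pulp.lpSum(A[i][j] * load[i][j] * slices[i]["rate"] for i in range(N)) <= B[j]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(B[j].value()) for j in range(M)]

# Toy example: 2 slices, 2 GPU types (cheap vs. expensive); numbers are illustrative only.
slices = [{"rate": 2.0}, {"rate": 0.5}]
load   = [[0.4, 0.2],    # slice 0: 1 req/s uses 40% of GPU 0, 20% of GPU 1
          [0.9, 0.3]]    # slice 1 loads both GPU types more heavily
cost   = [1.0, 3.7]      # $/hr
print(solve_allocation(slices, load, cost))  # for this toy data, [2, 0] is the cheapest allocation
```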
To ensure fair cost comparisons, we normalize RunPod\u2019s H100 pricing to match the pricing structures of major platforms. We calculate this by comparing RunPod\u2019s H100 cost ($4.69) to RunPod\u2019s A100-80G cost ($2.29), then adjusting relative to the A100\u2019s price on major clouds ($3.67), resulting in a normalized price of (4.69/2.29) \u00d7 3.67 = $7.516 for H100. Model and Inference Engine. In each experiment, we serve Llama2-7B (Llama-2-7b-hf) [45] using version 0.2.7 of the vLLM inference engine [21] with default parameters. M\u00b4 elange Parameters. The bucket ranges correspond to Figure 8 and comprise of 10 input length ranges and 6 output length ranges, for a total of 60 buckets. The slice factor is set to 8, for a total of 60 \u00b7 8 = 480 slices. Datasets. We evaluate across three distinct datasets to cover a wide range of application scenarios. The specific input/output length distributions of these datasets are illustrated in Figure 10. \u2022 Short context: This scenario simulates real-time conversational dynamics by employing the Chatbot Arena dataset (lmsys/lmsys-chat-1m) [53], which is derived from real-world chatbot conversations. The dataset is skewed towards shorter context (< 2000 tokens) because much of the data was generated in conversation with models that did not yet have a larger context window. \u2022 Long context: This scenario represents tasks with extensive input, such as summarization. We utilize the PubMed dataset (ccdv/pubmed-summarization) [7], comprising 133 thousand scientific papers from PubMed.com, a popular dataset for large-scale text summarization studies. \u2022 Mixed long/short context: This scenario captures settings with a combination of long and short context, such as an assistant that engages in succinct dialogue and responds to large document-based queries. To model this, we create a synthetic dataset by sampling 80% of requests from the Arena dataset and 20% of requests from the PubMed dataset. SLOs. We referred to current LLM inference benchmarks [3] to set TPOT SLOs, and opted for 40ms in contexts where swift responses are essential, and 120ms where longer response times are acceptable. Both selected SLOs surpass the average human reading speed, ensuring that our SLOs align with practical user experience considerations. However, as discussed in \u00a7 4.1, M\u00b4 elange is flexible to support alternative definitions of SLO. Baselines. We benchmark against deployments that utilize solely one GPU type. To derive baseline GPU allocations, we use the same ILP formulation from \u00a7 5.2.4 but restrict the solver to only a single GPU type. 6.2 Cost Savings Analysis We first compare the overall deployment costs of M\u00b4 elange\u2019s allocation compared to the single-GPU-type baselines for each dataset and SLO across a range of request rates (1-32 requests/s). Figure 11 displays all costs normalized against the cost of the A100-only strategy (shown in blue dotted lines), and the detailed GPU allocations are included in Appendix A.1. A10G-only and L4-only provisioning strategies are only 11 0 2500 5000 7500 10000 12500 Input Length (tokens) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Fraction Dataset Mixed (mean=1278.04) Arena (mean=329.43) Pubmed (mean=4174.13) (a) Input length distributions. 0 250 500 750 1000 Output Length (tokens) 0.000 0.025 0.050 0.075 0.100 0.125 Fraction Dataset Mixed (mean=219.87) Arena (mean=195.66) Pubmed (mean=314.1) (b) Output length distributions. Figure 10: Dataset input and output length distributions. 
included in the Arena dataset analysis because of PubMed and Mixed datasets\u2019 large requests. The key-value cache generated from even a single large request (\u223c12000+ tokens) exceeds the memory capacity of L4 and A10G (24GB). In M\u00b4 elange\u2019s allocation, L4 and A10G are included but restricted to only serve requests of size less than 12000 tokens. Loose SLO: 120ms Figures 11a, 11c, and 11e depict results with a loose 120ms TPOT SLO. M\u00b4 elange\u2019s mixed-GPU allocation is consistently the most cost-efficient approach, achieving cost reductions of up to 77%, 33% and 51% across the three evaluated datasets. \u2022 Arena Dataset. In Figure 11a, M\u00b4 elange achieves 15-77% cost savings. Lower-tier GPUs such as A10G/L4 offer superior cost efficiency in comparison to A100/H100 when handling lower request rates. In particular, for 1-2 requests/s, H100 has egregiously high cost because the load is not enough to even saturate a single GPU. Yet, as request rate increases, A10G/L4\u2019s cost advantage diminishes as high-capacity GPUs become a more reasonable choice. This aligns with findings in \u00a7 4.4 that emphasize matching GPU size with request rate. Note, however, that A10G/L4 are still competitive with A100 at higher request rates due to their T/$ advantage for smaller request sizes. \u2022 PubMed Dataset. In Figure 11c, M\u00b4 elange achieves 15-33% cost savings. H100\u2019s cost efficiency generally outperforms A100\u2019s, attributable to the dataset\u2019s longer context lengths for which H100 achieves higher T/$. However, there are still many request sizes for which A100 is the best, and this creates the opportunity for the mixed-GPU strategy to squeeze up to 25% cost savings. \u2022 Mixed Dataset. In Figure 11e, M\u00b4 elange achieves 13-51% cost savings. A100\u2019s cost efficiency is boosted relative to the PubMed dataset due to it being generally more cost efficient than H100 for small request sizes. This distinction highlights how the nature of the workload \u2014 specifically the variance in request lengths \u2014 can significantly influence relative cost efficiency across GPU types. Strict SLO: 40ms Figures 11b, 11d, and 11f depict the results from tightening the TPOT SLO to 40ms. Once again, M\u00b4 elange achieves the lowest cost in all settings, with up to 68%, 22%, and 51% reduction across the three evaluated datasets. \u2022 Arena Dataset In Figure 11b, M\u00b4 elange achieves 9-68% cost savings. A10G/L4 display considerably higher relative cost than in the loose SLO setting (Figure 11a). This is explained by A10G/L4\u2019s higher latency, which requires many more instances to be provisioned in order to meet the tight SLO deadline. M\u00b4 elange\u2019s mixed-GPU strategy is able to adapt to the strict SLO and provision mostly A100/H100\u2019s which exhibit much lower latencies. \u2022 PubMed Dataset In Figure 11d, M\u00b4 elange achieves 2-22% cost savings. H100 achieves a significant cost advantage over A100, especially relative to the 120ms setting ( 11c). H100 generally achieves lower latency than A100, making it the preferred option for long-context tight-SLO settings. 12 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 A10G L4 Mix (a) Short context: Arena, SLO = 120ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 3 Cost (w.r.t A100) H100 A100 A10G L4 Mix (b) Short context: Arena, SLO = 40ms. 1 2 4 8 16 32 Request Rate (req/s) 0.0 0.5 1.0 Cost (w.r.t A100) H100 A100 Mix (c) Long context: PubMed, SLO = 120ms. 
1 2 4 8 16 32 Request Rate (req/s) 0.0 0.5 1.0 Cost (w.r.t A100) H100 A100 Mix (d) Long context: PubMed, SLO = 40ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 Mix (e) Mixed long/short context, SLO = 120ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 Mix (f) Mixed long/short context, SLO = 40ms. Figure 11: Deployment cost across different datasets and SLOs. \u2022 Mixed Dataset In Figure 11f, M\u00b4 elange achieves 4-51% cost savings. A100 gains back some advantage over H100 relative to the PubMed setting due to the prevalence of shorter-context requests. Experiment Takeaways In loose SLO settings, M\u00b4 elange can utilize all GPU types (both lowand high-end) to serve request sizes for which they achieve greatest T/$ and closely match capacity to the request volume, significantly reducing costs (up to 77%). In tight SLO settings, A10G and L4 are less beneficial due to their high latency, reducing the cost savings M\u00b4 elange can achieve relative to single-GPU-type strategies. However, even in this setting, M\u00b4 elange squeezes large cost savings (up to 67%) based on the same principles. These evaluations highlight the key benefits of exploiting GPU heterogeneity in a unified allocation strategy: 1) GPU types can serve request sizes for which they have greatest T/$, 2) mixing GPU types enables fine-grained provisioning to closely match capacity to request volume, and 3) the allocation strategy can adapt to differing SLO stringency levels and continue to utilize the benefits of (1) and (2). In summary, M\u00b4 elange efficiently navigates the diversity of request sizes, rates, SLOs, and GPU types to automatically find the best GPU allocation and significantly reduce deployment cost. 6.3 SLO Satisfaction Next, we assess M\u00b4 elange\u2019s ability to select GPU allocations that meet the specified TPOT SLO. To do so, we provision actual cloud GPU instances based on M\u00b4 elange\u2019s selected allocation for each of the six experiment 13 Figure 12: Experiment TPOT CDFs. Figure 13: TPOT CDF from unknown output length experiment. settings in 6.2 at 4 requests/s. We deploy Llama2-7b with vLLM-0.2.7 on each of the provisioned GPUs. We sample request sizes randomly from the chosen dataset to serve 2000 live requests. We record the latency of each request and divide by output token length to derive average TPOT. Load Balancer. Most settings use multiple GPU instances, requiring a load balancer to distribute requests across them. The problem of load balancing variable-size requests to heterogeneous backends has been previously explored [11], and we leave it to future work to create adaptations for serving LLMs on heterogeneous GPUs. We instead use a simple variation of Join Shortest Queue (JSQ) routing [14]: the load balancer tracks outstanding requests for each GPU, and converts them to percent load as described in \u00a7 5.2.3. Upon receiving a new request, the load balancer chooses a GPU backend such that the resulting percent load on the chosen GPU is minimized relative to choosing any other GPU. This policy performed well in our experiments, but we expect that improvements to the load balancing policy will reduce tail latency. Results. Figure 12 presents CDFs of the observed average TPOTs across experiments. With an SLO of 120ms, over 99.95% of requests met SLO. When the SLO was tightened to 40ms, SLO adherence reduced to over 99.5% of requests. 
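The Join-Shortest-Queue variant described in the Load Balancer paragraph above can be sketched as follows. It converts in-flight requests to percent load using the profiled MaxTput values from \u00a7 5.2.3 and routes each new request to the backend whose resulting load is lowest. The class, method names, and example numbers are illustrative assumptions, not the experiment's actual code.

```python
# Load-aware JSQ variant: pick the GPU whose percent load after accepting the new
# request is minimized, where each in-flight request of size s on GPU G contributes
# 1 / MaxTput(G, s, SLO) of load.
from collections import defaultdict

class LoadAwareBalancer:
    def __init__(self, max_tput):
        # max_tput[gpu_id][size_bucket] = profiled max req/s for that size under the SLO
        self.max_tput = max_tput
        self.outstanding = defaultdict(list)   # gpu_id -> size buckets currently in flight

    def _load(self, gpu_id, extra=None):
        sizes = self.outstanding[gpu_id] + ([extra] if extra is not None else [])
        return sum(1.0 / self.max_tput[gpu_id][s] for s in sizes)

    def route(self, size_bucket):
        gpu = min(self.max_tput, key=lambda g: self._load(g, extra=size_bucket))
        self.outstanding[gpu].append(size_bucket)
        return gpu

    def complete(self, gpu_id, size_bucket):
        self.outstanding[gpu_id].remove(size_bucket)

# Example with two backends and two size buckets; throughput numbers are made up.
lb = LoadAwareBalancer({"a10g": {"small": 20.0, "large": 2.0},
                        "a100": {"small": 40.0, "large": 8.0}})
print(lb.route("large"), lb.route("small"))
```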
M\u00e9lange effectively chose GPU allocations that reduce cost while adhering to latency objectives; however, we recognize that services may require even higher SLO adherence, so we investigated the source of SLO violations in our experiment. SLO Violation Investigation. Of all requests that violated the TPOT SLO, we found that 84% failed to meet the SLO due to one of two reasons: request rate bursts or co-location with large requests. In our experiments, requests are sent according to a Poisson process, which occasionally creates short-lived bursts that overload the GPU capacity. Further, we choose the size of each request by randomly sampling from the configured dataset. Occasionally, several large requests are chosen in sequence, which can temporarily exceed the service capacity. In an online production environment, it is common practice to over-provision resources in order to absorb such bursts and other load variations. Within our framework, a desired over-provisioning rate (say, 20%) can be achieved by increasing the request rate input to the solver by the same proportion (20%). We discuss the future work of practically deploying a system based on M\u00e9lange in \u00a7 7. 6.4 Unknown Output Length As discussed in \u00a7 3.2, in order to focus on measuring the quality of M\u00e9lange\u2019s chosen GPU allocation, our evaluations utilize a load balancing policy that knows output lengths. Given that this is not a realistic assumption, we briefly evaluate M\u00e9lange\u2019s performance with a simple load balancing policy that is unaware of output lengths. We again note that we are actively working on addressing the limitations of unknown output lengths, and believe that LLM-specific load balancing that addresses this challenge is an exciting area for future work. We repeated the SLO satisfaction experiment (\u00a7 6.3) on the Arena dataset with a TPOT SLO of 40ms, but restricted the load balancer to only see request input lengths. The load balancer estimates output length by computing the average of all previous requests\u2019 output lengths. Otherwise, load balancing is performed identically to experiments in \u00a7 6.3. Figure 13 presents the experiment\u2019s TPOT CDF. Only 97.2% of requests met the 40ms deadline, compared to 99.5% in the setting where output length is known, a 5.6\u00d7 increase in SLO violations. Almost all (91%) of the additional SLO violations were due to large requests landing on a lower-end GPU that would have otherwise landed on a higher-end GPU if the output length was known. This result demonstrates that errors in estimating output length can manifest as increased tail latency due to poor load balancing decisions, further motivating future work on load balancing for LLMs. Nevertheless, we show that over-provisioning can account for the error in predicting output lengths. We re-ran the experiment, but inflated M\u00e9lange\u2019s request rate input by 5%, and observed that SLO adherence jumped back up to over 99.5%. 6.5 Solver Time We present the solver execution time from each experiment in Table 2.
Table 2: Solver execution time (seconds).
Request Rate (req/s) | Arena, SLO=120ms | Arena, SLO=40ms | PubMed, SLO=120ms | PubMed, SLO=40ms | Mix, SLO=120ms | Mix, SLO=40ms
1  | 0.137 | 0.177 | 0.232 | 0.295 | 0.168 | 0.336
2  | 0.194 | 0.265 | 0.234 | 0.334 | 0.253 | 0.381
4  | 0.192 | 0.346 | 0.287 | 0.381 | 0.297 | 0.459
8  | 0.248 | 0.433 | 0.269 | 0.384 | 0.321 | 0.545
16 | 0.299 | 0.448 | 0.389 | 0.509 | 0.439 | 0.537
32 | 0.316 | 0.494 | 0.791 | 0.96 | 0.912 | 1.14
Across all datasets and request rates, the solver\u2019s execution time remains under 1.2 seconds, which is negligible compared to workload execution time. We observe a modest increase in solver execution time with higher request volumes, attributed to the greater complexity in slice assignment due to a greater number of required GPUs. However, this increase is sub-linear relative to the increase in request rate, and the solver\u2019s execution time remains practical. Further, the execution of the solver is a one-time event. Users are required to run the solver only prior to deployment or when there is a significant change in the distribution of request sizes or rates. 7 Future Work There are several interesting directions related to leveraging heterogeneous GPUs for LLM serving. First, adapting heterogeneity-aware load balancing policies specifically for LLM systems where output length is unknown could reduce tail latency that occur due to poor balancing decisions. Further, we believe that generative models beyond LLMs, including image generation, video generation, and embedding models, each of which could be benefited by heterogeneous serving systems. Finally, M\u00b4 elange effectively derives the best GPU allocation for a fixed workload distribution and request rate, but does not address other challenges of deploying a live LLM service such as handling GPU unavailability or responding to dynamically changing request rate and request size distribution. 8 Conclusion In this study, we conduct an analysis of GPU cost efficiency in LLM service deployments, and identify three key factors (request sizes, request rates, and Service Level Objectives (SLOs)) that significantly impact GPU cost efficiency. Based on these findings, we introduce M\u00b4 elange, a framework for deriving the minimal-cost GPU allocation that attains SLO for a given LLM service specification. We frame the task of GPU selection as a cost-aware bin-packing problem and formulate it as an integer linear program. Through evaluations on a range of GPUs, request sizes, request rates, and latency SLOs, M\u00b4 elange consistently demonstrates significant reductions in deployment costs (up to 77%) while providing high SLO attainment. 15" + }, + { + "url": "http://arxiv.org/abs/2211.10438v7", + "title": "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models", + "abstract": "Large language models (LLMs) show excellent performance but are compute- and\nmemory-intensive. Quantization can reduce memory and accelerate inference.\nHowever, existing methods cannot maintain accuracy and hardware efficiency at\nthe same time. We propose SmoothQuant, a training-free, accuracy-preserving,\nand general-purpose post-training quantization (PTQ) solution to enable 8-bit\nweight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that\nweights are easy to quantize while activations are not, SmoothQuant smooths the\nactivation outliers by offline migrating the quantization difficulty from\nactivations to weights with a mathematically equivalent transformation.\nSmoothQuant enables an INT8 quantization of both weights and activations for\nall the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG,\nLlama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x\nspeedup and 2x memory reduction for LLMs with negligible loss in accuracy.\nSmoothQuant enables serving 530B LLM within a single node. 
Our work offers a\nturn-key solution that reduces hardware costs and democratizes LLMs. Code is\navailable at https://github.com/mit-han-lab/smoothquant.", + "authors": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han", + "published": "2022-11-18", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.01188v2", + "title": "Petals: Collaborative Inference and Fine-tuning of Large Models", + "abstract": "Many NLP tasks benefit from using large language models (LLMs) that often\nhave more than 100 billion parameters. With the release of BLOOM-176B and\nOPT-175B, everyone can download pretrained models of this scale. Still, using\nthese models requires high-end hardware unavailable to many researchers. In\nsome cases, LLMs can be used more affordably via RAM offloading or hosted APIs.\nHowever, these techniques have innate limitations: offloading is too slow for\ninteractive inference, while APIs are not flexible enough for research that\nrequires access to weights, attention or logits. In this work, we propose\nPetals - a system for inference and fine-tuning of large models collaboratively\nby joining the resources of multiple parties. We demonstrate that this strategy\noutperforms offloading for very large models, running inference of BLOOM-176B\non consumer GPUs with $\\approx$ 1 step per second, which is enough for many\ninteractive LLM applications. Unlike most inference APIs, Petals also natively\nexposes hidden states of served models, allowing to train and share custom\nmodel extensions based on efficient fine-tuning methods.", + "authors": "Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel", + "published": "2022-09-02", + "updated": "2023-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.00774v3", + "title": "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", + "abstract": "We show for the first time that large-scale generative pretrained transformer\n(GPT) family models can be pruned to at least 50% sparsity in one-shot, without\nany retraining, at minimal loss of accuracy. This is achieved via a new pruning\nmethod called SparseGPT, specifically designed to work efficiently and\naccurately on massive GPT-family models. We can execute SparseGPT on the\nlargest available open-source models, OPT-175B and BLOOM-176B, in under 4.5\nhours, and can reach 60% unstructured sparsity with negligible increase in\nperplexity: remarkably, more than 100 billion weights from these models can be\nignored at inference time. SparseGPT generalizes to semi-structured (2:4 and\n4:8) patterns, and is compatible with weight quantization approaches. The code\nis available at: https://github.com/IST-DASLab/sparsegpt.", + "authors": "Elias Frantar, Dan Alistarh", + "published": "2023-01-02", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.00978v4", + "title": "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration", + "abstract": "Large language models (LLMs) have fundamentally transformed the capabilities\nof numerous applications, from natural language processing to more intricate\ndomain-specific tasks in robotics and autonomous driving. 
Moreover, the\nimportance of on-device LLMs has grown significantly in the recent years.\nRunning LLMs on edge devices not only promises reduced latency and improved\nuser experience but also aligns with the increasing need for user privacy, as\ndata processing can occur locally. However, the astronomical model sizes of\nmodern LLMs and constraints of the edge devices, primarily in terms of memory\nsize and bandwidth, pose significant deployment challenges. In this paper, we\npropose Activation-aware Weight Quantization (AWQ), a hardware-friendly\napproach for LLM low-bit weight-only quantization. Our method is based on the\nobservation that weights are not equally important: protecting only 1% of\nsalient weights can greatly reduce quantization error. We then propose to\nsearch for the optimal per-channel scaling that protects the salient weights by\nobserving the activation, not weights. AWQ does not rely on any backpropagation\nor reconstruction, so it can well preserve LLMs' generalization ability on\ndifferent domains and modalities, without overfitting to the calibration set.\nAWQ outperforms existing work on various language modeling and domain-specific\nbenchmarks (coding and math). Thanks to better generalization, it achieves\nexcellent quantization performance for instruction-tuned LMs and, for the first\ntime, multi-modal LMs. Alongside AWQ, we implement TinyChat, an efficient and\nflexible inference framework tailored for on-device LLM/VLMs, offering more\nthan 3x speedup over the Huggingface FP16 implementation on both desktop and\nmobile GPUs. It also democratizes the deployment of the 70B Llama-2 model on\nmobile GPUs.", + "authors": "Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, Song Han", + "published": "2023-06-01", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.16369v1", + "title": "SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills", + "abstract": "Large Language Model (LLM) inference consists of two distinct phases -\nprefill phase which processes the input prompt and decode phase which generates\noutput tokens autoregressively. While the prefill phase effectively saturates\nGPU compute at small batch sizes, the decode phase results in low compute\nutilization as it generates one token at a time per request. The varying\nprefill and decode times also lead to imbalance across micro-batches when using\npipeline parallelism, resulting in further inefficiency due to bubbles.\n We present SARATHI to address these challenges. SARATHI employs\nchunked-prefills, which splits a prefill request into equal sized chunks, and\ndecode-maximal batching, which constructs a batch using a single prefill chunk\nand populates the remaining slots with decodes. During inference, the prefill\nchunk saturates GPU compute, while the decode requests 'piggyback' and cost up\nto an order of magnitude less compared to a decode-only batch. Chunked-prefills\nallows constructing multiple decode-maximal batches from a single prefill\nrequest, maximizing coverage of decodes that can piggyback. Furthermore, the\nuniform compute design of these batches ameliorates the imbalance between\nmicro-batches, significantly reducing pipeline bubbles.\n Our techniques yield significant improvements in inference performance across\nmodels and hardware. 
For the LLaMA-13B model on A6000 GPU, SARATHI improves\ndecode throughput by up to 10x, and accelerates end-to-end throughput by up to\n1.33x. For LLaMa-33B on A100 GPU, we achieve 1.25x higher end-to-end-throughput\nand up to 4.25x higher decode throughput. When used with pipeline parallelism\non GPT-3, SARATHI reduces bubbles by 6.29x, resulting in an end-to-end\nthroughput improvement of 1.91x.", + "authors": "Amey Agrawal, Ashish Panwar, Jayashree Mohan, Nipun Kwatra, Bhargav S. Gulavani, Ramachandran Ramjee", + "published": "2023-08-31", + "updated": "2023-08-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.09670v2", + "title": "DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving", + "abstract": "DistServe improves the performance of large language models (LLMs) serving by\ndisaggregating the prefill and decoding computation. Existing LLM serving\nsystems colocate the two phases and batch the computation of prefill and\ndecoding across all users and requests. We find that this strategy not only\nleads to strong prefill-decoding interferences but also couples the resource\nallocation and parallelism plans for both phases. LLM applications often\nemphasize individual latency for each phase: time to first token (TTFT) for the\nprefill phase and time per output token (TPOT) of each request for the decoding\nphase. In the presence of stringent latency requirements, existing systems have\nto prioritize one latency over the other, or over-provision compute resources\nto meet both.\n DistServe assigns prefill and decoding computation to different GPUs, hence\neliminating prefill-decoding interferences. Given the application's TTFT and\nTPOT requirements, DistServe co-optimizes the resource allocation and\nparallelism strategy tailored for each phase. DistServe also places the two\nphases according to the serving cluster's bandwidth to minimize the\ncommunication caused by disaggregation. As a result, DistServe significantly\nimproves LLM serving performance in terms of the maximum rate that can be\nserved within both TTFT and TPOT constraints on each GPU. Our evaluations show\nthat on various popular LLMs, applications, and latency requirements, DistServe\ncan serve 4.48x more requests or 10.2x tighter SLO, compared to\nstate-of-the-art systems, while staying within latency constraints for > 90% of\nrequests.", + "authors": "Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, Hao Zhang", + "published": "2024-01-18", + "updated": "2024-03-19", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.09213v1", + "title": "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", + "abstract": "Specialized accelerators such as GPUs, TPUs, FPGAs, and custom ASICs have\nbeen increasingly deployed to train deep learning models. These accelerators\nexhibit heterogeneous performance behavior across model architectures. Existing\nschedulers for clusters of accelerators, which are used to arbitrate these\nexpensive training resources across many users, have shown how to optimize for\nvarious multi-job, multi-user objectives, like fairness and makespan.\nUnfortunately, existing schedulers largely do not consider performance\nheterogeneity. 
In this paper, we propose Gavel, a heterogeneity-aware scheduler\nthat systematically generalizes a wide range of existing scheduling policies.\nGavel expresses these policies as optimization problems, making it easy to\noptimize for objectives in a heterogeneity-aware way, while also being\ncognizant of performance optimizations like space sharing. Gavel then uses a\nround-based scheduling mechanism to ensure jobs receive their ideal allocation\ngiven the target scheduling policy. Gavel's heterogeneity-aware policies allow\na heterogeneous cluster to sustain higher input load, and improve end\nobjectives such as average job completion time and makespan by up to 3.5x\ncompared to heterogeneity-agnostic policies.", + "authors": "Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia", + "published": "2020-08-20", + "updated": "2020-08-20", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.18677v1", + "title": "Splitwise: Efficient generative LLM inference using phase splitting", + "abstract": "Recent innovations in generative large language models (LLMs) have made their\napplications and use-cases ubiquitous. This has led to large-scale deployments\nof these models, using complex, expensive, and power-hungry AI accelerators,\nmost commonly GPUs. These developments make LLM inference efficiency an\nimportant challenge. Based on our extensive characterization, we find that\nthere are two main phases during an LLM inference request: a compute-intensive\nprompt computation, and a memory-intensive token generation, each with distinct\nlatency, throughput, memory, and power characteristics. Despite\nstate-of-the-art batching and scheduling, the token generation phase\nunderutilizes compute resources. Specifically, unlike compute-intensive prompt\ncomputation phases, token generation phases do not require the compute\ncapability of the latest GPUs, and can be run with lower power and cost.\n With Splitwise, we propose splitting the two phases of a LLM inference\nrequest on to separate machines. This allows us to use hardware that is\nwell-suited for each phase, and provision resources independently per phase.\nHowever, splitting an inference request across machines requires state transfer\nfrom the machine running prompt computation over to the machine generating\ntokens. We implement and optimize this state transfer using the fast back-plane\ninterconnects available in today's GPU clusters.\n We use the Splitwise technique to design LLM inference clusters using the\nsame or different types of machines for the prompt computation and token\ngeneration phases. Our clusters are optimized for three key objectives:\nthroughput, cost, and power. In particular, we show that we can achieve 1.4x\nhigher throughput at 20% lower cost than current designs. 
Alternatively, we can\nachieve 2.35x more throughput with the same cost and power budgets.", + "authors": "Pratyush Patel, Esha Choukse, Chaojie Zhang, \u00cd\u00f1igo Goiri, Aashaka Shah, Saeed Maleki, Ricardo Bianchini", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.AR", + "cats": [ + "cs.AR", + "cs.DC", + "I.2.0, I.3.1, C.4" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.13878v1", + "title": "Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism", + "abstract": "Transformer models have achieved state-of-the-art performance on various\ndomains of applications and gradually becomes the foundations of the advanced\nlarge deep learning (DL) models. However, how to train these models over\nmultiple GPUs efficiently is still challenging due to a large number of\nparallelism choices. Existing DL systems either rely on manual efforts to make\ndistributed training plans or apply parallelism combinations within a very\nlimited search space. In this approach, we propose Galvatron, a new system\nframework that incorporates multiple popular parallelism dimensions and\nautomatically finds the most efficient hybrid parallelism strategy. To better\nexplore such a rarely huge search space, we 1) involve a decision tree to make\ndecomposition and pruning based on some reasonable intuitions, and then 2)\ndesign a dynamic programming search algorithm to generate the optimal plan.\nEvaluations on four representative Transformer workloads show that Galvatron\ncould perform automatically distributed training with different GPU memory\nbudgets. Among all evluated scenarios, Galvatron always achieves superior\nsystem throughput compared to previous work with limited parallelism.", + "authors": "Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui", + "published": "2022-11-25", + "updated": "2022-11-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DB", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.14135v2", + "title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", + "abstract": "Transformers are slow and memory-hungry on long sequences, since the time and\nmemory complexity of self-attention are quadratic in sequence length.\nApproximate attention methods have attempted to address this problem by trading\noff model quality to reduce the compute complexity, but often do not achieve\nwall-clock speedup. We argue that a missing principle is making attention\nalgorithms IO-aware -- accounting for reads and writes between levels of GPU\nmemory. We propose FlashAttention, an IO-aware exact attention algorithm that\nuses tiling to reduce the number of memory reads/writes between GPU high\nbandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of\nFlashAttention, showing that it requires fewer HBM accesses than standard\nattention, and is optimal for a range of SRAM sizes. We also extend\nFlashAttention to block-sparse attention, yielding an approximate attention\nalgorithm that is faster than any existing approximate attention method.\nFlashAttention trains Transformers faster than existing baselines: 15%\nend-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the\nMLPerf 1.1 training speed record, 3$\\times$ speedup on GPT-2 (seq. length 1K),\nand 2.4$\\times$ speedup on long-range arena (seq. length 1K-4K). 
FlashAttention\nand block-sparse FlashAttention enable longer context in Transformers, yielding\nhigher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on\nlong-document classification) and entirely new capabilities: the first\nTransformers to achieve better-than-chance performance on the Path-X challenge\n(seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1%\naccuracy).", + "authors": "Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher R\u00e9", + "published": "2022-05-27", + "updated": "2022-06-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.17192v2", + "title": "Fast Inference from Transformers via Speculative Decoding", + "abstract": "Inference from large autoregressive models like Transformers is slow -\ndecoding K tokens takes K serial runs of the model. In this work we introduce\nspeculative decoding - an algorithm to sample from autoregressive models faster\nwithout any changes to the outputs, by computing several tokens in parallel. At\nthe heart of our approach lie the observations that (1) hard language-modeling\ntasks often include easier subtasks that can be approximated well by more\nefficient models, and (2) using speculative execution and a novel sampling\nmethod, we can make exact decoding from the large models faster, by running\nthem in parallel on the outputs of the approximation models, potentially\ngenerating several tokens concurrently, and without changing the distribution.\nOur method can accelerate existing off-the-shelf models without retraining or\narchitecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration\ncompared to the standard T5X implementation, with identical outputs.", + "authors": "Yaniv Leviathan, Matan Kalman, Yossi Matias", + "published": "2022-11-30", + "updated": "2023-05-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.15566v1", + "title": "SpotServe: Serving Generative Large Language Models on Preemptible Instances", + "abstract": "The high computational and memory requirements of generative large language\nmodels (LLMs) make it challenging to serve them cheaply. This paper aims to\nreduce the monetary cost for serving LLMs by leveraging preemptible GPU\ninstances on modern clouds, which offer accesses to spare GPUs at a much\ncheaper price than regular instances but may be preempted by the cloud at any\ntime. Serving LLMs on preemptible instances requires addressing challenges\ninduced by frequent instance preemptions and the necessity of migrating\ninstances to handle these preemptions.\n This paper presents SpotServe, the first distributed LLM serving system on\npreemptible instances. Several key techniques in SpotServe realize fast and\nreliable serving of generative LLMs on cheap preemptible instances. First,\nSpotServe dynamically adapts the LLM parallelization configuration for dynamic\ninstance availability and fluctuating workload, while balancing the trade-off\namong the overall throughput, inference latency and monetary costs. Second, to\nminimize the cost of migrating instances for dynamic reparallelization, the\ntask of migrating instances is formulated as a bipartite graph matching\nproblem, which uses the Kuhn-Munkres algorithm to identify an optimal migration\nplan that minimizes communications. 
Finally, to take advantage of the grace\nperiod offered by modern clouds, we introduce stateful inference recovery, a\nnew inference mechanism that commits inference progress at a much finer\ngranularity and allows SpotServe to cheaply resume inference upon preemption.\nWe evaluate on real spot instance preemption traces and various popular LLMs\nand show that SpotServe can reduce the P99 tail latency by 2.4 - 9.1x compared\nwith the best existing LLM serving systems. We also show that SpotServe can\nleverage the price advantage of preemptive instances, saving 54% monetary cost\ncompared with only using on-demand instances.", + "authors": "Xupeng Miao, Chunan Shi, Jiangfei Duan, Xiaoli Xi, Dahua Lin, Bin Cui, Zhihao Jia", + "published": "2023-11-27", + "updated": "2023-11-27", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.05920v1", + "title": "Fast Distributed Inference Serving for Large Language Models", + "abstract": "Large language models (LLMs) power a new generation of interactive AI\napplications exemplified by ChatGPT. The interactive nature of these\napplications demand low job completion time (JCT) for model inference. Existing\nLLM serving systems use run-to-completion processing for inference jobs, which\nsuffers from head-of-line blocking and long JCT. We present FastServe, a\ndistributed inference serving system for LLMs. FastServe exploits the\nautoregressive pattern of LLM inference to enable preemption at the granularity\nof each output token. FastServe uses preemptive scheduling to minimize JCT with\na novel skip-join Multi-Level Feedback Queue scheduler. Based on the new semi\ninformation-agnostic setting of LLM inference, the scheduler leverages the\ninput length information to assign an appropriate initial queue for each\narrival job to join. The higher priority queues than the joined queue are\nskipped to reduce demotions. We design an efficient GPU memory management\nmechanism that proactively offloads and uploads intermediate states between GPU\nmemory and host memory for LLM inference. We build a system prototype of\nFastServe based on NVIDIA FasterTransformer. Experimental results show that\ncompared to the state-of-the-art solution Orca, FastServe improves the average\nand tail JCT by up to 5.1$\\times$ and 6.4$\\times$, respectively.", + "authors": "Bingyang Wu, Yinmin Zhong, Zili Zhang, Gang Huang, Xuanzhe Liu, Xin Jin", + "published": "2023-05-10", + "updated": "2023-05-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.07104v1", + "title": "Efficiently Programming Large Language Models using SGLang", + "abstract": "Large language models (LLMs) are increasingly used for complex tasks\nrequiring multiple chained generation calls, advanced prompting techniques,\ncontrol flow, and interaction with external environments. However, efficient\nsystems for programming and executing these applications are lacking. To bridge\nthis gap, we introduce SGLang, a Structured Generation Language for LLMs.\nSGLang is designed for the efficient programming of LLMs and incorporates\nprimitives for common LLM programming patterns. We have implemented SGLang as a\ndomain-specific language embedded in Python, and we developed an interpreter, a\ncompiler, and a high-performance runtime for SGLang. 
These components work\ntogether to enable optimizations such as parallelism, batching, caching,\nsharing, and other compilation techniques. Additionally, we propose\nRadixAttention, a novel technique that maintains a Least Recently Used (LRU)\ncache of the Key-Value (KV) cache for all requests in a radix tree, enabling\nautomatic KV cache reuse across multiple generation calls at runtime. SGLang\nsimplifies the writing of LLM programs and boosts execution efficiency. Our\nexperiments demonstrate that SGLang can speed up common LLM tasks by up to 5x,\nwhile reducing code complexity and enhancing control.", + "authors": "Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, Ying Sheng", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.PL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.17323v2", + "title": "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", + "abstract": "Generative Pre-trained Transformer models, known as GPT or OPT, set\nthemselves apart through breakthrough performance across complex language\nmodelling tasks, but also by their extremely high computational and storage\ncosts. Specifically, due to their massive size, even inference for large,\nhighly-accurate GPT models may require multiple performant GPUs, which limits\nthe usability of such models. While there is emerging work on relieving this\npressure via model compression, the applicability and performance of existing\ncompression techniques is limited by the scale and complexity of GPT models. In\nthis paper, we address this challenge, and propose GPTQ, a new one-shot weight\nquantization method based on approximate second-order information, that is both\nhighly-accurate and highly-efficient. Specifically, GPTQ can quantize GPT\nmodels with 175 billion parameters in approximately four GPU hours, reducing\nthe bitwidth down to 3 or 4 bits per weight, with negligible accuracy\ndegradation relative to the uncompressed baseline. Our method more than doubles\nthe compression gains relative to previously-proposed one-shot quantization\nmethods, preserving accuracy, allowing us for the first time to execute an 175\nbillion-parameter model inside a single GPU for generative inference. Moreover,\nwe also show that our method can still provide reasonable accuracy in the\nextreme quantization regime, in which weights are quantized to 2-bit or even\nternary quantization levels. We show experimentally that these improvements can\nbe leveraged for end-to-end inference speedups over FP16, of around 3.25x when\nusing high-end GPUs (NVIDIA A100) and 4.5x when using more cost-effective ones\n(NVIDIA A6000). The implementation is available at\nhttps://github.com/IST-DASLab/gptq.", + "authors": "Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh", + "published": "2022-10-31", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.11514v2", + "title": "HexGen: Generative Inference of Large-Scale Foundation Model over Heterogeneous Decentralized Environment", + "abstract": "Serving generative inference of the large-scale foundation model is a crucial\ncomponent of contemporary AI applications. 
This paper focuses on deploying such\nservices in a heterogeneous and decentralized setting to mitigate the\nsubstantial inference costs typically associated with centralized data centers.\nTowards this end, we propose HexGen, a flexible distributed inference engine\nthat uniquely supports the asymmetric partition of generative inference\ncomputations over both tensor model parallelism and pipeline parallelism and\nallows for effective deployment across diverse GPUs interconnected by a fully\nheterogeneous network. We further propose a sophisticated scheduling algorithm\ngrounded in constrained optimization that can adaptively assign asymmetric\ninference computation across the GPUs to fulfill inference requests while\nmaintaining acceptable latency levels. We conduct an extensive evaluation to\nverify the efficiency of HexGen by serving the state-of-the-art Llama-2 (70B)\nmodel. The results suggest that HexGen can choose to achieve up to 2.3 times\nlower latency deadlines or tolerate up to 4 times more request rates compared\nwith the homogeneous baseline given the same budget.", + "authors": "Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan", + "published": "2023-11-20", + "updated": "2024-02-04", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.06180v1", + "title": "Efficient Memory Management for Large Language Model Serving with PagedAttention", + "abstract": "High throughput serving of large language models (LLMs) requires batching\nsufficiently many requests at a time. However, existing systems struggle\nbecause the key-value cache (KV cache) memory for each request is huge and\ngrows and shrinks dynamically. When managed inefficiently, this memory can be\nsignificantly wasted by fragmentation and redundant duplication, limiting the\nbatch size. To address this problem, we propose PagedAttention, an attention\nalgorithm inspired by the classical virtual memory and paging techniques in\noperating systems. On top of it, we build vLLM, an LLM serving system that\nachieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV\ncache within and across requests to further reduce memory usage. Our\nevaluations show that vLLM improves the throughput of popular LLMs by\n2-4$\\times$ with the same level of latency compared to the state-of-the-art\nsystems, such as FasterTransformer and Orca. The improvement is more pronounced\nwith longer sequences, larger models, and more complex decoding algorithms.\nvLLM's source code is publicly available at\nhttps://github.com/vllm-project/vllm", + "authors": "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica", + "published": "2023-09-12", + "updated": "2023-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. 
In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated with each culture by the LLM.\nWe discover that culture-conditioned generation consists of linguistic \"markers\"\nthat set marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found at: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherently multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. 
With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. 
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into the model's limitations.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplication emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. 
Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. 
Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. 
We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. 
While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. 
This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. 
This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. 
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, while also proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture of the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. 
To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. 
Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. 
Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. 
To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. 
To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. 
This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. 
Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. 
In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. 
To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. 
ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. 
Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. 
Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. 
Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. 
However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. 
This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. 
This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. 
By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. 
In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. 
These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. 
We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. 
Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.14453v1", + "title": "EPI-SQL: Enhancing Text-to-SQL Translation with Error-Prevention Instructions", + "abstract": "The conversion of natural language queries into SQL queries, known as\nText-to-SQL, is a critical yet challenging task. This paper introduces EPI-SQL,\na novel methodological framework leveraging Large Language Models (LLMs) to\nenhance the performance of Text-to-SQL tasks. EPI-SQL operates through a\nfour-step process. Initially, the method involves gathering instances from the\nSpider dataset on which LLMs are prone to failure. These instances are then\nutilized to generate general error-prevention instructions (EPIs).\nSubsequently, LLMs craft contextualized EPIs tailored to the specific context\nof the current task. Finally, these context-specific EPIs are incorporated into\nthe prompt used for SQL generation. EPI-SQL is distinguished in that it\nprovides task-specific guidance, enabling the model to circumvent potential\nerrors for the task at hand. Notably, the methodology rivals the performance of\nadvanced few-shot methods despite being a zero-shot approach. An empirical\nassessment using the Spider benchmark reveals that EPI-SQL achieves an\nexecution accuracy of 85.1\\%, underscoring its effectiveness in generating\naccurate SQL queries through LLMs. The findings indicate a promising direction\nfor future research, i.e. 
enhancing instructions with task-specific and\ncontextualized rules, for boosting LLMs' performance in NLP tasks.", + "authors": "Xiping Liu, Zhao Tan", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.DB" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Text-to-SQL aims to simplify the process of accessing data in relational databases for non-expert users. Researchers have made impressive achievements in this task by designing models (Wang et al., 2020; Cai et al., 2021; Li et al., 2023b; Qi et al., 2022; Li et al., 2023a) or fine-tuning pre-trained models (Yu et al., 2020; Shi et al., 2021; Scholak et al., 2021). LLMs have demonstrated impressive code generation abilities without fine-tuning (Chen et al., 2021; Chowdhery et al., 2022; Zhao et al., 2022; Athiwaratkun et al., 2022). A series of studies have been conducted to investigate the capacity of LLMs on Text-to-SQL. Rajkumar et al. (2022) and Liu et al. (2023) studied the efficacy of Text-to-SQL on various LLMs. They explored the impact of prompt structure, number of few-shot demonstrations, and other factors on the outcomes using zero-shot and few-shot prompting. The rapid development of prompting-based methods has led to the proposal of numerous effective prompting principles. For example, CoT prompting (Kojima et al., 2022) is proposed to improve LLMs' reasoning ability by producing intermediate steps before predicting a final answer; Self-Consistency (Wang et al., 2022) mitigates the randomness in the output of LLMs by voting over diverse results and selecting the best one. For Text-to-SQL, these prompting enhancement methods are equally effective. Self-Debug (Chen et al., 2023) employs CoT prompting to obtain a question explanation and generate the initial SQL, and then instructs LLMs to debug the SQL. Coder-Reviewer (Zhang et al., 2022), MBEExec (Shi et al., 2022) and LEVER (Ni et al., 2023) utilize consistency principles to choose the optimal one from multiple candidate results. MBEExec (Shi et al., 2022) selects the SQL with the most common execution result; Coder-Reviewer (Zhang et al., 2022) selects the SQL by considering both the likelihood of the predicted SQL given the question and the likelihood of the question given the SQL; LEVER (Ni et al., 2023) selects the SQL with the highest score, which represents the probability that the SQL is correct and is calculated based on the question, the SQL and the execution results. LLM-based Text-to-SQL methods include pipelined, single-round, and multi-round approaches. Pipeline methods improve performance by decomposing a Text-to-SQL task to reduce its complexity. DIN-SQL (Pourreza and Rafiei, 2023) breaks down the task into four sub-tasks and applies few-shot learning, while C3-prompt (Dong et al., 2023) divides it into two parts using zero-shot learning. Single-round approaches focus on input representation and demonstration selection, with DPS-prompt (Nan et al., 2023) using a similarity-diversity sampling strategy for demonstration selection. Multi-round methods involve interaction with external information. Self-Debug (Chen et al., 2023), for example, continually issues new requests based on the output of the previous round to improve performance.
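As a rough illustration of the execution-consistency selection summarized above (the MBEExec-style reranking that keeps the SQL whose execution result is most common), the following Python sketch is provided; the `candidate_sqls` list and the `execute` callable are hypothetical stand-ins introduced only for this example and are not part of any published implementation.

```python
from collections import Counter

def select_by_execution_consistency(candidate_sqls, execute):
    """Return the sampled SQL whose execution result agrees with the most candidates."""
    # Execute every sampled candidate; treat failures as a distinct outcome.
    outcomes = []
    for sql in candidate_sqls:
        try:
            outcomes.append(repr(execute(sql)))
        except Exception:
            outcomes.append("<execution-error>")
    # Count how often each execution result appears across the samples.
    tally = Counter(outcomes)
    # Pick the candidate whose result falls in the largest agreement cluster.
    best_idx = max(range(len(candidate_sqls)), key=lambda i: tally[outcomes[i]])
    return candidate_sqls[best_idx]
```

The design choice here is the usual self-consistency one: agreement between independently sampled queries is taken as a proxy for correctness, so execution results rather than query strings are compared.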
However, these methods have not considered the potential of instructions in Text-to-SQL.", + "pre_questions": [], + "main_content": "Introduction Text-to-SQL is a task in natural language processing (NLP) that aims to automatically generate structured query language (SQL) queries from natural language text. This task enables users to access databases without requiring SQL knowledge or familiarity with the database schema, thus facilitating the work of data analysts and software developers who need to write complex SQL queries. Text-to-SQL has attracted significant interest from both industry and academia in recent years (Wang et al., 2020; Choi et al., 2021; Zhao et al., 2022). With the rapid progress of Large Language Models (LLMs), the research areas of NLP are being revolutionized (Zhao et al., 2023). LLMs can now serve as a general-purpose language task solver (to some extent), and they have shown impressive performance in a series of NLP tasks, e.g., arithmetics, symbolic reasoning (Kojima et al., 2022), disambiguation QA, movie recommendation, etc. (Suzgun et al., 2022). Numerous studies have explored the application of LLMs for Text-to-SQL, and a number of LLM-based methods have been proposed (Rajkumar et al., 2022; Liu et al., 2023; Pourreza and Rafiei, 2023; Gao et al., 2023). These methods fall into two categories: zero-shot and few-shot. In zero-shot settings, LLMs are tasked with generating SQL queries based solely on task-descriptive instructions. Conversely, few-shot prompting involves providing LLMs with a small number of demonstrations, facilitating task completion through In-Context Learning (ICL) (Brown et al., 2020; Radford et al., 2019). The few-shot approach has attracted considerable attention due to its ability to provide models with additional context through the use of examples (Chen et al., 2023; Pourreza and Rafiei, 2023; Ni et al., 2023; Liu et al., 2022; Nan et al., 2023; Guo et al., 2023; Gao et al., 2023). In contrast, the zero-shot approach has not received as much research interest. We believe that the zero-shot approach remains underexplored and holds significant untapped potential. Therefore, the focus of this paper is to enrich zero-shot prompts with specially designed instructions, aiming to unlock the full potential of this approach. Typically, an instruction within the context of a Text-to-SQL task is simply a sentence that outlines the task, such as \"Please translate this question into a SQL query.\" However, such instructions are suboptimal for two primary reasons: (1) Limited information. Traditional instructions merely activate the model's Text-to-SQL capabilities without providing additional useful information, thereby only scratching the surface of LLMs' capabilities; (2) Lack of context awareness. The same instruction is applied to all questions without adaptation to the varying context of different questions. It is more desirable to provide instructions that are tailored to individual questions. To address these issues, this paper proposes to enhance zero-shot prompts with Error-Prevention Instructions (EPIs). As shown in Figure 1, the EPI is designed to furnish precise and valuable information for each Text-to-SQL task. Figure 1: An example of EPI and answers generated by EPI-SQL. The orange line represents the connection of EPI and potential errors, and the green line indicates the connection of correct answer and EPI.
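To make the contrast concrete, a minimal Python sketch of a zero-shot Text-to-SQL prompt that can optionally carry an error-prevention instruction is shown below; the `build_zero_shot_prompt` helper, its arguments, and the prompt layout are illustrative assumptions rather than the exact format used by EPI-SQL.

```python
def build_zero_shot_prompt(schema, question, epi=None):
    """Assemble a zero-shot Text-to-SQL prompt, optionally augmented with an EPI."""
    # A conventional task-descriptive instruction, as in plain zero-shot prompting.
    instruction = "Please translate this question into a SQL query."
    parts = [instruction, "Database schema:\n" + schema, "Question: " + question]
    if epi is not None:
        # A contextualized error-prevention instruction for this specific question
        # (hypothetical wording; in EPI-SQL such guidance is produced by the LLM itself).
        parts.insert(1, "Before writing the SQL, follow this guidance: " + epi)
    parts.append("SQL:")
    return "\n\n".join(parts)

# Hypothetical usage with a toy schema and question.
prompt = build_zero_shot_prompt(
    schema="singer(singer_id, name, country, age)",
    question="How many singers are from France?",
    epi="Use COUNT(*) with a WHERE filter on country; avoid unnecessary JOINs.",
)
```

The only difference between the two settings is the extra guidance string, which is the sense in which an EPI enriches an otherwise generic instruction.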
In comparison to traditional instructions, the EPI provides comprehensive information that includes accurate guidance for the current task while also prompting LLMs to avoid potential errors. With EPI, we introduce EPI-SQL, a novel zero-shot method for Text-to-SQL. The basic idea behind EPI-SQL is to derive insights from past mistakes. Through an analysis of Text-to-SQL outcomes on LLMs, we observed that an LLM may make similar mistakes when encountering comparable questions or fail under analogous circumstances. To effectively leverage these errors, we initially compiled a set of error-prone instances in the Text-to-SQL task. Then, a number of prevention rules, i.e., EPIs, are drawn from these instances. When presented with a question, the method finds the EPIs most relevant to the current question, synthesizes the EPIs and proposes a contextualized EPI that caters to the context of the current task. This method enables the LLM to anticipate and avoid potential errors, ultimately leading to improved results. Through experimentation, we have demonstrated the effectiveness of EPI-SQL. On the Spider dataset (Yu et al., 2018), EPI-SQL achieved an execution accuracy of 85.1% and a test suite accuracy of 77.9%, outperforming the existing state-of-the-art systems. It is worth noting that these results were obtained in a zero-shot scenario, underscoring the substantial potential of instruction-enhancing techniques, which has often been overlooked in previous research. In summary, we make the following contributions in this work: \u2022 We present an enhancement to zero-shot prompts through the incorporation of Error-Prevention Instructions (EPIs). The EPI provides comprehensive information that delivers precise guidance for the current task while simultaneously prompting large language models to circumvent potential errors. \u2022 We present EPI-SQL, an innovative approach to Text-to-SQL tasks that incorporates EPIs. This method enables LLMs to learn not only from past errors but also to proactively anticipate potential future errors, ultimately enhancing overall performance results. \u2022 We conduct a comprehensive set of experiments and find that EPIs are highly beneficial for the Text-to-SQL task. In fact, our method achieves high performance in a zero-shot setting, demonstrating the significant potential of EPIs and instruction-enhancing techniques. 2 Methodology 2.1 Problem Description Text-to-SQL is a task that maps a natural language question Q to a SQL query given a database schema. In this work, we investigate the use of LLMs for Text-to-SQL. Prompts in LLM-based applications act as the essential input that steers the model towards producing contextually relevant and accurate outputs. Figure 2: The framework of our method. A basic strategy of prompting an LLM is zero-shot prompting, where the model is provided with a task description, also known as instruction, I, and expected to generate the desired output. An LLM M estimates a probability distribution over SQL queries y, allowing us to generate queries token by token. Thus, the zero-shot Text-to-SQL generation with LLMs can be formulated as follows: P_M(y | x) = \prod_{i=1}^{|y|} P_M(y_i | prompt(Q, D, I), y