diff --git "a/related_53K/test_related_long_2404.17729v1.json" "b/related_53K/test_related_long_2404.17729v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.17729v1.json" @@ -0,0 +1,8616 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17729v1", + "title": "CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving", + "abstract": "Large Language Models (LLMs) have shown great ability in solving traditional\nnatural language tasks and elementary reasoning tasks with appropriate\nprompting techniques. However, their ability is still limited in solving\ncomplicated science problems. In this work, we aim to push the upper bound of\nthe reasoning capability of LLMs by proposing a collaborative multi-agent,\nmulti-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs\nto play different roles in a problem-solving team, and encourage different\nrole-play agents to collaboratively solve the target task. In particular, we\ndiscover that applying different reasoning paths for different roles is an\neffective strategy to implement few-shot prompting approaches in the\nmulti-agent scenarios. Empirical results demonstrate the effectiveness of the\nproposed methods on two college-level science problems over competitive\nbaselines. Our further analysis shows the necessity of prompting LLMs to play\ndifferent roles or experts independently. We release the code at:\nhttps://github.com/amazon-science/comm-prompt", + "authors": "Pei Chen, Boran Han, Shuai Zhang", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "LLMs have shown remarkable proficiency in solving many downstream tasks (Qu et al., 2020b; Chen et al., 2021; Xu et al., 2024c,b), paving the way towards Artificial General Intelligence. With the advent of GPT-3 (Brown et al., 2020) and its emergent abilities (Wei et al., 2022a) in solving downstream tasks on both zero-shot and few-shot settings, many decoder-only LLMs follow (Ling et al., 2023b), such as PaLM (Chowdhery et al., 2022), LLaMA (Brown et al., 2020; OpenAI, 2023), BLOOM (Workshop et al., 2023), Claude (Bai et al., 2022), OPT (Zhang et al., 2022), Mistral (Jiang et al., 2023), Falcon (Penedo et al., 2023) etc. Considering the inference speed and economic expenditure, we choose GPT-3.5 as the backbone model for all the baselines and our CoMM approach. In order to unlock the potential of the LLMs in solving downstream tasks (Yi and Qu, 2022; Chen et al., 2022; Qu et al., 2020a; Zhang et al., 2023; Yu et al., 2024; Xu et al., 2024a), many prompting approaches arise, exempting from manipulating the billion-level parameters (Li et al., 2023c). Among these prompting methods, ordinary prompting methods follow Brown et al. (2020) and employ task descriptions and sample demonstrations (fewshot) as the prompts for downstream tasks. To alleviate the difficulty of directly outputting the answer for LLMs, many prompting methods simplify the process by predicting the middle reasoning steps (chain-of-thought (Wei et al., 2022b)) or answering the decomposed sub-problems first (Wang et al., 2023b; Yao et al., 2023; Hao et al., 2023; Zhou et al., 2023; Ling et al., 2024). To overcome the lack of computing ability and outdated knowledge base, some work prompt LLMs to utilize external tools (Gao et al., 2023; Chen et al., 2023). 
To further unlock the ability of LLMs in solving complicated problems, agent-based methods that prompt LLMs to play specific roles are trending. Among them, single-agent methods only use one instance of an LLM. ExpertPrompt (Xu et al., 2023) prompts an LLM to play a domain expert and successfully elicits the LLM to answer domain questions. [Figure 2: Overall Framework of CoMM: An Example from College Physics with the Few-shot Setting. A system message describes the environment and task; three agents are prompted to play two domain experts and a summarizer with their role names, responsibilities, and principles; each role responds based on the current discussion, and multi-turn dialogues are facilitated when necessary.] EmotionPrompt (Li et al., 2023a) improves the performance of agents with emotional prompts. Huang et al. (2022); Shinn et al. (2023); Madaan et al. (2023) prompt LLMs to perform self-reflection or self-refinement to correct mistakes. Wang et al. (2023a); Sun et al. (2023) prompt LLMs to plan before solving a specific task. Wang et al. (2023c) prompts a single agent to play multiple roles with different personas. Another branch of agent-based approaches uses multiple agents. For example, Liang et al. (2023); Chan et al. (2023); Du et al. (2023) prompt LLMs to play different roles in debating for problem-solving. ChatEval (Chan et al., 2023) uses multiple debating agents for automatic LLM evaluation. MathChat (Wu et al., 2023b) proposes a conversational framework that solves math problems through interactions between the user and an LLM agent. Park et al. (2023) and Li et al. (2023b) prompt LLMs to act as different agents to simulate human behaviors. Our work is closely related to these works, but we aim to prompt LLMs to play different domain experts in a collaborative framework for complicated reasoning problems, and to study how to embed few-shot examples into the multi-agent framework. Along with the agent-based prompting methods, many open-sourced applications have emerged. For example, AutoGPT (Wu et al., 2023a) acts as an AI agent that attempts to achieve a given goal by breaking it into sub-tasks and using the internet and other tools in an automatic loop. AutoGen (Wu et al., 2023a) designs a framework for building LLM applications based on multi-agent conversations. MetaGPT (Hong et al., 2023) prompts multiple agents to play product managers, architects, project managers, and engineers for a software project. SkyAGI (Park et al., 2023) demonstrates emergent human-behavior simulation capabilities in LLMs. While sharing the same multi-agent framework, our work focuses on exploring the effectiveness of the framework itself, i.e., we aim to answer whether multiple agents are necessary and how to prompt multiple agents to work collaboratively.", "pre_questions": [], "main_content": "Introduction Large Language Models (LLMs) such as GPT (Brown et al., 2020; OpenAI, 2023), LLaMA (Touvron et al., 2023a,b) and PaLM (Chowdhery et al., 2022), have shown remarkable proficiency in solving many downstream tasks (Liu et al., 2021), without further fine-tuning the model parameters. 
However, their ability is still limited in solving reasoning and mathematical problems (Wei et al., 2022b), especially complicated science problems (Ma et al., 2023; Xu et al., 2023; Ling et al., 2023a). In consideration of this limitation, and the costly fine-tuning overhead of LLMs with billion-level parameters, many prompting methods have emerged, i.e., methods that carefully craft input queries to effectively communicate with LLMs and obtain desired outputs. Apart from the benefit of not having to manipulate the parameters of the LLMs, these prompting methods seamlessly integrate the pre-trained models into downstream tasks by eliciting desired model behaviors (Sahoo et al., 2024). Among these prompting approaches, some prompt LLMs to reason through multiple intermediate steps or subproblems for reasoning tasks (Wei et al., 2022b; Wang et al., 2023b; Yao et al., 2023; Hao et al., 2023; Zhou et al., 2023), while others prompt LLMs to take advantage of external tools (Gao et al., 2023; Chen et al., 2023). Although these methods have further pushed the upper bound of the reasoning and math abilities of LLMs, their potential on complicated science problems still awaits further improvement. Taking the college-level physics problem in Figure 1 as an example, we can observe that the Chain-of-thought prompting method still suffers from knowledge and factual errors (incorrect equations used for the given problem) and computing errors (incorrect calculations). Recently, agent-based prompting methods that prompt an LLM to play a specific role or act as an intelligent agent further unlock the ability of LLMs to solve complicated problems. For example, Xu et al. (2023) prompt an LLM to play a domain expert and successfully elicit the LLM to answer domain questions. Huang et al. (2022); Shinn et al. (2023); Madaan et al. (2023) prompt LLMs to perform self-reflection or self-refinement to correct mistakes. Wang et al. (2023a); Sun et al. (2023) prompt LLMs to plan before solving a specific task. Wang et al. (2023c) prompts a single agent to play multiple roles with different personas, and Liang et al. (2023); Chan et al. (2023); Du et al. (2023) prompt LLMs to play different roles in debating
for problem-solving. [Figure 1: Chain-of-thought still makes knowledge and computing errors in solving the complex science problem. In the example, laser light passes through two narrow slits separated by 0.5 micrometer and produces bright fringes separated by 1.0 millimeter; when the frequency of the laser light is doubled, the correct new fringe separation is 0.5 mm, but the few-shot CoT response misapplies the double-slit relation d·sin(θ) = m·λ and concludes 0.25 mm.] Following these works, we propose a collaborative multi-agent framework (CoMM) that prompts LLMs to play different roles with different domain knowledge or task-solving duties for problem-solving. In particular, we propose a multi-path reasoning method that enables few-shot learning in the multi-agent framework. Empirical results on multiple complicated college-level science problems show that our method significantly outperforms strong baselines. Our further analysis shows that it is beneficial to include multiple agents for the collaboration, instead of prompting one agent to play multiple roles altogether. In this section, we first formally define the single-agent prompting framework, and then introduce the formal definition of the multi-agent prompting framework and its adaptations to both zero-shot and few-shot settings (CoMM). Single-agent Prompting Given a language model P(θ) and input text x, single-agent prompting applies a function to the input text, x′ = fprompt(x) (which usually defines the target problem or task), and then predicts the answer y with the language model acting as a single problem-solving agent, P(y|x′; θ). In the zero-shot setting, the prompting function f does not contain any demonstration examples, while in the few-shot setting, the prompting function contains a few examples. Multi-agent Prompting For multi-agent prompting, we have n language models P1(θ1), P2(θ2), ..., Pn(θn) that play different agents or roles in the framework. These language models can be the same (θ1 = θ2 = ... = θn) or different (θ1 ≠ θ2 ≠ ... ≠ θn). For input text x, each agent i has its own prompting function f^i_prompt(x) that formats the input task or problem for that agent. We define the interactions of these agents as a non-parametric function φ(y|g1, g2, ..., gn), where gi = Pi(yi|f^i_prompt(x); θi), yi is the output from agent i, and y is the final answer. Collaborative Zero-shot Scenario In our collaborative multi-agent setting, we restrict the multiple agents to inherit from the same language model and fix the number of agents to three. We then have three language models P1(θ), P2(θ), P3(θ) as the agents: P1(θ) and P2(θ) as the problem-solving experts and P3(θ) as the summarizer, as shown in Figure 2. Specifically, for a given input problem x, we use a prompt function to turn it into a system message that defines the collaborative team-working environment, xs = fs(x). For each agent, we define prompting functions to characterize its role and prompt it to give its solution accordingly. In particular, for the first expert agent, the prompting function formats the problem and the system message as x1 = f1(x, xs), and the agent then gives its output P1(y1|x1; θ). For the second expert agent, the prompting function formats the problem, the system message, and the output y1 as x2 = f2(x, xs, y1), and the agent then gives its output P2(y2|x2; θ). 
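To make this collaborative protocol concrete, below is a minimal sketch of one discussion round with the two experts and the summarizer described in this section. The OpenAI chat-completion call mirrors the backbone and temperature stated later in the experimental settings, but the role instructions, function names, and single-round structure are illustrative assumptions rather than the authors' released prompts (see the linked repository for the actual implementation).

```python
# Illustrative sketch of one CoMM discussion round (zero-shot, three agents).
from openai import OpenAI

client = OpenAI()

def chat(system_prompt: str, user_prompt: str) -> str:
    # Single chat-completion call; model and temperature follow the paper's settings.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        temperature=0,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_prompt}],
    )
    return resp.choices[0].message.content

def comm_round(task: str) -> str:
    # x_s = f_s(x): system message defining the team-working environment.
    system = f"You are part of a problem-solving team working on the task: {task}"
    # First expert agent (e.g., a physicist): x1 = f1(x, x_s).
    y1 = chat(system, f"As a physicist, solve the problem step by step:\n{task}")
    # Second expert agent (e.g., a mathematician) sees the first expert's output:
    # x2 = f2(x, x_s, y1).
    y2 = chat(system, "As a mathematician, check the calculations and continue the "
                      f"solution.\nProblem: {task}\nPhysicist's solution: {y1}")
    # Summarizer gives the final answer: x3 = f3(x, x_s, y1, y2).
    return chat(system, "Summarize the discussion and give the final answer.\n"
                        f"Problem: {task}\nPhysicist: {y1}\nMathematician: {y2}")
```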
For the third summarizer, the prompting function will also consider the outputs from the two experts x3 = f3(x, xs, y1, y2) and then the agent gives the final answer P3(y|x3; \u03b8). For certain specific input tasks, multi-turn discussions are necessary. In this case, the output of the second expert agent will circulate back to the first agent as the input prompt again, and then repeat the afore-mentioned discussions, as demonstrated in the Figure 2. Collaborative Few-shot Scenario In a multiagent setting, it is not trivial to add the few-shot examples to the various agents. Which agent should we give the few-shot examples? We adopt a multi-path reasoning approach that gives the fewshot examples to the different agents. In particular, different agents will have their own expertisebased reasoning path in the few-shot demonstrations. Formally, the two expert prompting functions x1 = f1(x, xs, e1) and x2 = f2(x, xs, e2, y1) will take exemplars e1 and e2 as inputs. Take Figure 2 as an example, the few-shot examples will be added to both the physicist and the mathematician agents, but with different reasoning paths. More details can be found in the Appendix A. 4 Experiments In this section, we will first introduce the evaluation datasets and benchmark that focus on complicated science problems. After that, we introduce the strong baseline prompting methods for comparison. At last, we introduce the results of our methods and the baselines on the benchmark. 4.1 Datasets College Physics is a dataset from Massive Multitask Language Understanding (MMLU), which covers 57 subjects across different domain knowledge. It focuses on college-level physics problems. These problems are still very challenging and far from satisfying performance with large language models. Like the example from Figure 3, LLMs are still suffering from the lack of knowledge and computing ability. Moral Scenarios is aother dataset from MMLU (Hendrycks et al., 2020). Moral Scenarios focus on advanced professional-level social science problems that are yet challenging for large language models, which is among the worst performing tasks for many language models (Ma et al., 2023). Both datasets are multiple choice questions, and we use the correct rate (Accuracy) as the metric for comparison. 4.2 Baselines Standard (Brown et al., 2020) is the first work that introduced performing tasks without any taskspecific training or examples, relying solely on its general pre-training with prompting. In this work, we format each problem as \"Q: {question} A:\" at zero-shot settings, and as \"Q: {question example 1)} A: {answer example 1} ... Q: {question example n} A: {answer example n} Q: {question)} A:\" for the few-shot setting with n demonstration examples. Chain-of-thought (CoT) (Wei et al., 2022b) improves the Standard prompting approaches by introducing a series of intermediate natural language reasoning steps that lead to the final output (chain of thought). It hypothesize that giving the LLMs longer predicting window, they have better chance to reach the answer, in comparison with directly requiring them to output the answer. For zero-shot implementation, we follow the Zero-shot-CoT proposed by Wang et al. 
(2023a), and add a \"Let’s think step by step\" prompt before the answer, i.e., \"Q: {question} A: Let’s think step by step.\". As for the few-shot implementation, we follow the original settings from Wei et al. (2022b), i.e., \"Q: {question example 1} A: Let’s think step by step. {answer example 1 with chain of thought} ... Q: {question example n} A: Let’s think step by step. {answer example n with chain of thought} Q: {question} A: Let’s think step by step.\" for the few-shot setting with n demonstration examples. Thought Experiment (Thought) (Ma et al., 2023) is a reasoning framework specialized in moral reasoning through counterfactual reasoning. It is a multi-agent framework with multi-step prompting, where each step prompts the LLMs to solve a specific sub-task. Specifically, this method employs counterfactual thinking to envision various, often hypothetical, situations, and then deliberates on the consequences of these imagined circumstances. By processing these scenarios, it consolidates intermediate reflections, thereby leading to a deeper comprehension of the issue at hand and guiding towards the most appropriate solution. We adopt the same settings for both zero-shot and few-shot as provided by Ma et al. (2023). 4.3 Settings Backbone Model For a fair comparison, we use gpt-3.5-turbo-0613 (https://openai.com/) as the backbone model and set the temperature to 0 in all our experiments. Settings for College Physics We prompt the first agent P1(θ) to be a physicist, the second agent P2(θ) to be a mathematician, and the third agent P3(θ) to be the summarizer. In the zero-shot setting, we do not provide demonstration examples, while in the few-shot setting, we give the same 5 examples to the two experts but with different reasoning paths, i.e., the reasoning path of a physicist role and the reasoning path of a mathematician role, respectively. We only prompt the group to discuss once for this benchmark. More details can be found in Appendix A. Settings for Moral Scenarios In the zero-shot setting, we prompt the first agent P1(θ) to be a task decomposer, the second agent P2(θ) to be a sub-problem solver, and the third agent P3(θ) to be the summarizer. In the few-shot setting, we also give each expert 5 examples, and we prompt the first agent P1(θ) to be a chain-of-thought reasoner with a CoT reasoning path, the second agent P2(θ) to be a Thought reasoner with a thought-experiment path, and the third agent P3(θ) to be the summarizer. We prompt the group to discuss twice for this benchmark. More details can be found in Appendix A. 4.4 Main Results The main experimental results are shown in Table 1. Table 1: Main Test Results (Accuracy, %); numbers in parentheses are performance gains of CoMM over the previous state-of-the-art. Columns: Zero-shot (Moral Scenarios, College Physics), Few-shot (Moral Scenarios, College Physics). Standard (Brown et al., 2020): 38.65, 44.12, 38.21, 48.04. CoT (Wei et al., 2022b): 45.58, 50.00, 64.92, 56.86. Thought (Ma et al., 2023): 49.39, -, 56.42, -. CoMM: 52.17 (+2.78), 54.90 (+4.90), 65.03 (+0.11), 64.71 (+7.85). It is readily observable that the proposed CoMM approach outperforms the state-of-the-art baselines in both zero-shot and few-shot settings. In detail, it achieves absolute average improvements of 3.84% in the zero-shot setting and 8.23% in the few-shot setting. 
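As a quick arithmetic check, the stated averages can be reproduced from Table 1 by averaging CoMM's gain over the strongest specialized baseline in each column; for few-shot Moral Scenarios, the 8.23% figure matches a comparison against Thought (56.42) rather than CoT (64.92):

```python
# Reproducing the reported average absolute improvements from Table 1.
zero_shot_gains = [52.17 - 49.39,   # Moral Scenarios, vs. Thought
                   54.90 - 50.00]   # College Physics, vs. CoT
few_shot_gains = [65.03 - 56.42,    # Moral Scenarios, vs. Thought
                  64.71 - 56.86]    # College Physics, vs. CoT
print(round(sum(zero_shot_gains) / 2, 2))  # 3.84
print(round(sum(few_shot_gains) / 2, 2))   # 8.23
```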
CoMM improves more in few-shot settings, further demonstrating the effectiveness of applying the multi-path reasoning approach in the multi-agent framework. Also, CoMM improves more on the complicated College Physics dataset that requires more domain knowledge, further showcasing the efficacy of CoMM in solving complex problems. Table 2: Single Agent vs. Multiple Agents (Accuracy, %); numbers in parentheses are the performance gains. Moral Scenarios — Zero-shot: 27.71 (single agent) vs. 52.17 (+24.46); Few-shot: 42.68 vs. 65.03 (+22.35). College Physics — Zero-shot: 42.16 vs. 54.90 (+12.74); Few-shot: 56.86 vs. 64.71 (+7.85). Table 3: Single Expert vs. Multiple Experts on College Physics (Accuracy, %); numbers in parentheses are the performance gains. Zero-shot / Few-shot — CoT (Wei et al., 2022b): 50.00 / 56.86; One Physicist Only: 47.06 / 44.12; One Mathematician Only: 42.16 / 58.82; Two Physicists: 47.05 / 50.98; Two Mathematicians: 52.94 / 59.80; Both Experts (CoMM): 54.90 (+1.96) / 64.71 (+4.91). 5 Analysis In this section, we demonstrate the necessity of multiple \"multiples\": multiple agents, multiple experts, multi-path reasoning, and multi-turn discussions, with empirical evidence. 5.1 Are Multiple Independent Agents Necessary? Our proposed CoMM approach prompts multiple instances of LLMs to play different agents. But why not prompt a single instance of an LLM to play all the roles together to solve the target problem? This is similar to the multi-agent framework proposed by Wang et al. (2023c). We experiment with the same prompting text as CoMM using a single instance of the LLM, and the results are shown in Table 2. Apparently, multiple agents (CoMM) significantly outperform the single-agent approach across all benchmarks and settings. We hypothesize that a single instance of an LLM tends to be self-consistent, and prompting it to switch among different roles confuses the model when making predictions. Our results are in line with the findings from Xu et al. (2023). 5.2 Are Multiple Domain Experts Necessary? In the College Physics benchmark, we prompt the LLMs to play two experts: one physicist and one mathematician, aiming to utilize their domain knowledge independently so that they solve the problem collaboratively and complementarily. We hope the physicist agent can elicit the domain knowledge of physics and the mathematician agent can overcome the computing errors. Here we empirically examine whether the multiple domain experts are indeed collaborating. As shown in Table 3, the single-expert approach shows poor performance and could not beat the CoT baseline. Furthermore, we prompt the LLMs to play multiple experts but with the same expertise. The results in Table 3 demonstrate that such settings improve over the single-expert cases, but still underperform the setting with multiple different experts. Overall, the results empirically demonstrate the necessity and efficacy of the multiple-expert collaborative framework. 5.3 Are Multiple Turns of Discussion Necessary? As mentioned in Section 3, our proposed CoMM framework supports multi-turn discussions, which means that the agents can discuss multiple times to reach a final answer. So are multi-turn discussions necessary? We experiment on the benchmarks with one-turn and two-turn discussions, as shown in Table 4. It turns out that the appropriate number of discussion turns depends on the benchmark or dataset. 
Table 4: Single Turn vs. Multiple Turns (Accuracy, %). Moral Scenarios — Zero-shot: 48.27 (one turn) vs. 52.17 (two turns); Few-shot: 64.92 vs. 65.03. College Physics — Zero-shot: 54.90 vs. 45.09; Few-shot: 64.71 vs. 55.88. For the Moral Scenarios dataset, two-turn discussions perform better, while one-turn discussions perform better on the College Physics dataset. We hypothesize that the rigorous science problems in College Physics require the reasoning paths to be concise and clear: a plethora of discussion may bring confusion and hallucinations that are not beneficial to such problems. However, the social science problems from the Moral Scenarios benchmark naturally benefit from thorough discussions, as many of the truths in social science originated from human discussions that reached a consensus. 6 Case Study In this section, we provide case studies of CoMM on different benchmarks. In Figure 3, we show the reasoning results of the same problem from Figure 1 on College Physics, in which the CoT approach suffers from incorrect knowledge and incorrect calculations. In our CoMM framework, the physicist agent also makes calculation errors. However, the error is corrected by the mathematician, and finally the team reaches a correct answer. We can observe that our framework successfully elicits the domain physics knowledge from the physicist agent and the calculation ability from the mathematician agent. Also, the three agents closely collaborate with each other: the physicist provides domain knowledge, the mathematician takes advantage of that knowledge and corrects the calculations, and the summarizer gives the final answer based on the discussions. In Figure 4, we delve into a detailed case study conducted on the Moral Scenarios benchmark, specifically focusing on the few-shot setting. Notably, the CoT baseline encounters difficulties in correctly identifying the moral correctness of the action \"trash dog’s poop while walking.\" This limitation highlights the challenges faced by LLMs in understanding and judging moral scenarios. Our model, CoMM, introduces a more layered approach to this issue. In the initial phase, both the CoT reasoner and the Thought reasoner within CoMM failed to accurately assess the moral aspect of the scenario. This initial failure underscores the intricacies of moral reasoning and the need for more advanced reasoning capabilities in the models. However, the scenario undergoes a significant shift following the first round of discussion within CoMM. After this collaborative reflection, both reasoners – the CoT and the Thought – adjust their initial assessments and correctly identify the scenario as morally correct. Additionally, the summarizer in CoMM plays a crucial role at this point, affirming the correct conclusion reached by the reasoners and providing a definitive final answer that aligns with their revised understanding. This case study effectively showcases the initial challenges and the eventual success of CoMM in moral reasoning, emphasizing the value of iterative and collaborative reasoning LLM agents. 7 Conclusion This study underscores the significant strides made in enhancing the reasoning capabilities of LLMs through the innovative CoMM prompting framework. By leveraging a multi-agent, multi-reasoning-path approach, we successfully prompted LLMs to assume varied roles within a problem-solving team, fostering a collaborative environment crucial for tackling complex science problems. 
The empirical results obtained from two college-level science tasks not only validate the efficacy of our method but also highlight the potential of few-shot prompting in multi-agent contexts. More importantly, our analysis reveals the indispensable role of distinct role-play in achieving more nuanced and sophisticated problem-solving strategies. This research paves the way for future explorations into the realm of advanced AI reasoning, particularly in the application of LLMs to intricate and specialized tasks. [Figure 3: Example results of CoMM on College Physics (the same problem as in Figure 1). The physicist recalls that the fringe separation equals (wavelength × distance to the screen) / (distance between the slits) but wrongly concludes that halving the wavelength doubles the separation to 2.0 mm; the mathematician reuses the physicist’s relation, notes that doubling the frequency halves the wavelength, and corrects the result to s2 = (λ1/2)·D/d = s1/2 = 0.5 mm; the summarizer reports the final answer of 0.5 mm.] 8 Limitations Even though our proposed CoMM framework has further pushed the upper bound of the reasoning ability of LLMs, the framework still suffers from limitations. The proposed CoMM framework still requires task-specific design to define the experts and reasoning examples. However, this is a common limitation for all CoT-style (Wei et al., 2022b) approaches. For example, the CoT approach needs specific designs for the few-shot examples with the chain of reasoning steps; the Thought baseline (Ma et al., 2023) requires specific thought-experiment designs, and it only works on one specific benchmark (the Moral Scenarios task from MMLU (Hendrycks et al., 2020)). We leave the automatic prompting design for the CoMM framework as future work.", "additional_info": [ [ { "url": "http://arxiv.org/abs/2404.14527v1", "title": "Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity", "abstract": "Large language models (LLMs) are increasingly integrated into many online\nservices. 
However, a major challenge in deploying LLMs is their high cost, due\nprimarily to the use of expensive GPU instances. To address this problem, we\nfind that the significant heterogeneity of GPU types presents an opportunity to\nincrease GPU cost efficiency and reduce deployment costs. The broad and growing\nmarket of GPUs creates a diverse option space with varying costs and hardware\nspecifications. Within this space, we show that there is not a linear\nrelationship between GPU cost and performance, and identify three key LLM\nservice characteristics that significantly affect which GPU type is the most\ncost effective: model request size, request rate, and latency service-level\nobjective (SLO). We then present M\\'elange, a framework for navigating the\ndiversity of GPUs and LLM service specifications to derive the most\ncost-efficient set of GPUs for a given LLM service. We frame the task of GPU\nselection as a cost-aware bin-packing problem, where GPUs are bins with a\ncapacity and cost, and items are request slices defined by a request size and\nrate. Upon solution, M\\'elange derives the minimal-cost GPU allocation that\nadheres to a configurable latency SLO. Our evaluations across both real-world\nand synthetic datasets demonstrate that M\\'elange can reduce deployment costs\nby up to 77% as compared to utilizing only a single GPU type, highlighting the\nimportance of making heterogeneity-aware GPU provisioning decisions for LLM\nserving. Our source code is publicly available at\nhttps://github.com/tyler-griggs/melange-release.", + "authors": "Tyler Griggs, Xiaoxuan Liu, Jiaxiang Yu, Doyoung Kim, Wei-Lin Chiang, Alvin Cheung, Ion Stoica", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 LLM Inference Optimization A significant body of research has focused on optimizing LLM inference. One stream concentrates on memory optimization, particularly through improved key-value cache reuse [54] and management strategies [21]. Another avenue seeks to minimize latency, such as scheduling optimization [51, 1, 47], speculative decoding [22], and kernel optimization [8, 42]. Additional optimizations include quantization [10, 23, 50] and sparsification [9]. Instead of altering inference logic, our work assumes a fixed inference engine configuration and concentrates on reducing LLM deployment costs by choosing cost-effective GPU instance types. 2.2 Machine Learning with Cloud Resources Recent studies have explored various strategies for reducing the cost of machine learning (ML) inference or training. Several focus on utilizing spot instances [43, 15, 52, 13] are complementary to our work. Other work targets deployment on heterogeneous resources [5, 6, 31, 28, 26], but focuses primarily on model training rather than serving. Leveraging serverless instances for inference cost reduction has been examined in [2]. Nonetheless, prior work predominantly concentrates on machine learning prior to the advent of LLMs, which we show to have unique characteristics that significantly impact cost efficiency. More recent studies, such as [27, 18], focus on LLMs, but they propose strategies for reducing costs via optimal migration plans and parallelism with heterogeneous resources. They do not identify the key LLM service characteristics that impact cost efficiency, which our work highlights. 
Another line of work [56, 38] explores splitting LLM inference into its two phases (prefill and decode) and performing the two phases on separate nodes, perhaps with different GPU types. Our work shows that, even within a phase, the best GPU type can change based on LLM service specifications.", + "pre_questions": [], + "main_content": "Introduction Large language models (LLMs) like GPT-4 [37] and the Llama model family [44, 45] are increasingly integrated into many online services, including search engines [39, 25], chatbots [36], and virtual assistants [29, 48, 49]. However, a significant obstacle in deploying LLM services is their high operational costs. The substantial size and computational demands of LLMs require the use of hardware accelerators, typically GPUs1, to achieve high-performance inference. Unfortunately, GPUs are expensive. For example, renting just a single on-demand NVIDIA A100 on a major cloud provider costs over $2, 600 per month, and many services require multiple A100s to serve especially large models or request volumes. Prior work [51, 54, 21] has introduced methods for increasing inference throughput to squeeze ever more performance out of expensive GPUs. However, less attention has been given to choosing the best GPU type(s) to use for a given LLM service. The broad and growing market of hardware accelerators, including NVIDIA GPUs [35], AMD GPUs [46], Google TPUs [20], CPUs [24], and more [4], creates a diverse option space with a wide range of hardware specifications and rental prices. Table 1 depicts the specs of just four NVIDIA GPUs, which already exhibits a large variety of costs and performance. Within this option space, we find that there is not a linear relationship between GPU cost and performance, which creates variations in GPU \u2217Equal contribution. 1For brevity, we use \u201caccelerator\u201d and \u201cGPU\u201d interchangeably in this work. 1 arXiv:2404.14527v1 [cs.DC] 22 Apr 2024 cost efficiency, defined based on common pricing models [36] as the sum of input and output tokens served per GPU dollar cost (T/$). Instead, we show that a GPU\u2019s cost efficiency is strongly impacted by three key LLM service characteristics: 1. Request Size: An LLM request\u2019s size is made up of its input and output token lengths. For small request sizes, we find that lower-end GPUs produce greater T/$ than their high-end GPU counterparts. Deployment expenses can be reduced by employing cheaper GPUs for smaller requests while reserving costly, high-capacity GPUs for handling larger request sizes. 2. Request Rate: To reduce resource waste, provisioned GPU capacity should align with request volume. An expensive under-utilized GPU exhibits lower T/$ than a cheaper GPU that still meets service demand. Therefore, at low request rates, services can reduce costs by right-sizing from expensive high-end GPUs to cheap low-end GPUs. At higher request rates, leveraging a mix of GPU types facilitates finer-grained resource scaling to better match request volume. 3. Service-level Objective: Services typically establish latency SLOs to ensure high service quality, with the specific SLO varying according to the service\u2019s interactivity needs. In general, low-end GPUs incur higher latency than high-end GPUs. As a result, low-end GPUs may only meet tight SLOs at a low output token rate (or not at all), severely limiting achieved T/$. Thus, high-end GPUs are often required for stringent latency SLOs, whereas low-end GPUs can reduce costs in loose-SLO settings. 
Consequently, we find that the under-appreciated heterogeneity of GPUs presents opportunities for increasing GPU cost efficiency and significantly reducing LLM service costs. Consider combining the three observations above into a single service deployment: high-cost A100s may be necessary to serve large requests within SLO requirements, however, low-cost A10Gs can meet the latency deadline for small requests at higher T/$, reducing overall cost. Then, during periods of low service activity, the even cheaper L4 can maintain service availability at the lowest cost. The key challenge, then, is to navigate the diversity of request sizes, request rates, latency SLOs, and GPU instance types to find the optimal GPU selection for a given LLM service. In this paper, we present M\u00b4 elange2, a framework that maximizes GPU cost efficiency by automatically and efficiently navigating the heterogeneity of GPUs and LLM service specifications to derive the best GPU provisioning strategy. M\u00b4 elange\u2019s strength stems from its heterogeneity-awareness, that is, its knowledge of how diverse LLM service characteristics impact the cost efficiency of each GPU type. M\u00b4 elange takes as input the service workload profile, latency SLO, and set of GPU type options, and produces the GPU allocation that minimizes deployment costs while attaining SLO. We formulate the task of GPU selection as a cost-aware bin-packing problem where bins are GPUs with an associated capacity and cost, and items are request slices defined by a request size and rate, and solve the bin-packing problem with an off-the-shelf integer linear programming (ILP) solver. M\u00b4 elange can be easily extended to include new GPU types (or other hardware) and alternative definitions of SLO, flexibly supporting diverse LLM service deployments. We evaluate M\u00b4 elange across four GPU types (L4, A10G, A100, and H100), three datasets with varying request size distributions, and a range of request rates and SLOs. Compared to using only a single GPU type, M\u00b4 elange\u2019s heterogeneity-aware mixed-GPU-type approach achieves 9-77% cost reduction in short-context workloads (interactive chats), 2-33% in long-context workloads (document-based tasks), and 4-51% in mixedcontext workloads (both in a single service). M\u00b4 elange efficiently derives GPU allocations within 1.2 seconds, and attains SLO for > 99.95% of requests at a loose SLO, and > 99.5% at a tight SLO. In summary, this paper makes the following contributions: \u2022 We present an extensive analysis of GPU cost efficiency and identify three key LLM service characteristics as significant determinants of GPU cost efficiency: request size, request rate, and latency SLO (\u00a7 4). \u2022 We introduce M\u00b4 elange to efficiently select the most cost-efficient set of GPU instances for a given LLM deployment, while ensuring that the resulting allocation satisfies a prescribed SLO requirement (\u00a7 5). \u2022 We evaluate M\u00b4 elange\u2019s efficacy, demonstrating its significant cost reductions (up to 77%) across a range of real-world workloads, GPU types, and SLO constraints (\u00a7 6). 2 Type L4 A10G (PCIe) A100-80G (SXM) H100 (SXM) On-demand Price ($/h) 0.7 1.01 3.67 7.5163 Instance Provider GCP AWS Azure RunPod Instance Name g2-standard-4 g5.xlarge NC24ads A100 v4/N.A. N.A. Memory (GB) 24 24 80 80 Memory Bandwidth (GB/s) 300 600 1935 3350 FP16 (TFLOPS) 242 125 312 1979 Table 1: Specifications of four NVIDIA GPUs: L4, A10G, A100, and H100. 
3.1 LLM Request Size Variance Unlike traditional machine learning workloads, LLM tasks exhibit significant variance in request sizes, or input and output lengths. For example, ResNet [16] requires a fixed-dimension input (image size) and results in a fixed-dimension output (classification size). Conversely, transformer-based language models are flexible to support variable-length prompts and produce variable-length generation sequences, as in the Chatbot Arena dataset [53] derived from a real-world LLM chatbot service. Figure 10 illustrates the request size distributions of Chatbot Arena, demonstrating the extensive diversity of request sizes in practical scenarios. Unsurprisingly, high variance in request sizes introduces significant variation in request latency. As illustrated in Figure 1, request latency can increase by 110\u00d7 when the input/output length expands from 25 tokens to 2000 tokens for the Llama2-7B model served on an A100 GPU. Consequently, it becomes crucial to recognize that LLM requests, unlike non-transformer models, impose varied loads on GPU resources. 2M\u00b4 elange is the French word for \u201cmixture\u201d 3H100\u2019s hourly pricing was computed as described in the Hardware section above. 3 (a) LLaMA-7B 85X (b) LLaMA-70B Figure 1: Request latency of different input/output lengths on A100-80G. 3.2 Unknown Output Length In most online services, an LLM request\u2019s output length is not known a priori. In this paper, we evaluate GPU cost efficiency based on both input and output lengths. We do this to develop a holistic understanding of GPU cost efficiency, but M\u00b4 elange\u2019s GPU provisioning decision does not require specific knowledge of the output lengths of individual requests. Instead, it relies only on an estimated distribution of request sizes. We believe it is a fair assumption that a service\u2019s GPU allocator is given a distribution of expected request sizes based on the historical data of previously served input and output lengths. Because output lengths are a significant contributor to the load of individual requests, unknown output lengths are primarily a challenge for the load balancer, not the allocator. While important, the task of output length prediction for load balancing is orthogonal to M\u00b4 elange. Therefore, to evaluate the efficacy of M\u00b4 elange\u2019s GPU allocations, we use a load balancer that assumes knowledge of output lengths. We are actively working to remove this assumption by exploring load balancers based on output length prediction. There are several prior works that perform online LLM output length prediction with high accuracy [19, 55], but they have not been applied to load balancing. To the best of our knowledge, there is no load balancer that addresses the problem of unknown output lengths, and we believe this to be an promising area of future work. 4 GPU Cost Efficiency Analysis In this section, we analyze GPU cost efficiency in the context of LLM serving. We first describe our key definitions (\u00a7 4.1), then evaluate the cost efficiency of serving Llama2-7b on two widely used GPUs, NVIDIA\u2019s A100 [34] and A10G [33] to show that GPU cost efficiency is significantly influenced by request size(\u00a7 4.2), request latency SLO(\u00a7 4.3), and request rate(\u00a7 4.4). Finally, we validate the generality of our findings by extending our investigation to include additional hardware, specifically NVIDIA\u2019s H100 and L4 GPUs, and a larger model variant, Llama2-70B (\u00a7 4.5). 
For clarity, the plots are tagged with the request size, request rate, and SLO used to generate the plot. In each setting, we use vLLM-0.2.7 as the inference engine [21]. Results can differ across versions. 4.1 Definitions Service-level Objective (SLO). As in prior work [21, 56, 51], we use the average Time Per Output Token (TPOT) as our Service-level Objective (SLO). Average TPOT is determined by dividing total request latency by the number of generated tokens. In general, SLOs are application dependent: in-line code editors (e.g., GitHub Copilot [29]) require tight latency deadlines to suggest real-time code additions, whereas text summarization services may permit additional processing time to generate concise and accurate summaries for large documents. There are other popular definitions of SLO, such as time to first token and total request 4 (a) Equivalent input and output lengths (b) Input and output lengths vary independently Figure 2: Figure (a) depicts A10G and A100\u2019s relative T/$ across request sizes. Figure (b) expands (a) into separate input and output length dimensions. Tile colors indicate which GPU achieved higher T/$, and values represent the most cost efficient GPU\u2019s percent increase of T/$ relative to the less cost efficient GPU. latency. To simplify our discussion, we use only TPOT, however, M\u00b4 elange is flexible to support alternative definitions of SLO. Cost Efficiency Metric. We use tokens per dollar (T/$) to measure GPU cost efficiency, which is calculated by summing input and output token lengths served within some time period, and dividing the sum by the GPU\u2019s rental cost for the same period. The resulting value enables us to directly compare cost efficiency across GPU instance types with different rental costs. Pricing inference based on token lengths is a common practice in LLM services [36, 12], but some services set different prices for input and output tokens. We only compare T/$ between GPUs in settings where the request sizes are the same, so we do not lose generality to such cost models. In settings where request sizes differ, we report the overall cost of the GPU allocation that meets the aggregate workload. In general, we derive T/$ based on profiling a GPU at maximum saturation. When an SLO is specified, T/$ is calculated by finding the highest GPU saturation at which average TPOT still meets the SLO requirement. 4.2 Request Size and Cost Efficiency We now show that request sizes, shown to be widely varying (\u00a7 3.1), dramatically affect GPU cost efficiency. We served Llama2-7b on A100 and A10G (specifications reported in Table 1), and derived each GPU\u2019s T/$ at maximum GPU saturation across a range of request sizes, with results in Figure 2a. Interestingly, neither GPU achieves highest T/$ across the entire request size space. Instead, each GPU achieves greater cost efficiency within distinct regions of the request size spectrum. For smaller request sizes, A10G exhibits up to 2.6\u00d7 greater T/$ than A100. Conversely, for larger request sizes, A100 achieves up to 1.5\u00d7 the cost efficiency of A10G. We extend this exploration to include both input and output lengths in Figure 2b to observe how they affect cost efficiency separately. We find that the two dimensions influence cost efficiency in a similar manner: smaller sizes benefit A10G, and larger sizes are best served on A100. 
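Before continuing, here is a concrete illustration of how the T/$ metric and the SLO-constrained measurement from § 4.1 fit together. The GPU hourly prices are those in Table 1, but the saturation profiles below are hypothetical placeholders chosen only to mirror the qualitative trend reported here (the high-end GPU wins at a tight SLO, the low-end GPU at a loose one); they are not measurements from the paper.

```python
# Hypothetical illustration of tokens-per-dollar (T/$) under a TPOT SLO.
def tokens_per_dollar(profile, hourly_price, slo_tpot_ms):
    """profile: list of (request_rate, tokens_served_per_hour, avg_tpot_ms)
    measured at increasing GPU saturation; returns the best T/$ within the SLO."""
    feasible = [tokens / hourly_price
                for _, tokens, tpot in profile if tpot <= slo_tpot_ms]
    return max(feasible) if feasible else 0.0

# Made-up saturation sweeps for one request size (real prices from Table 1).
a10g_profile = [(1, 1.2e6, 55), (2, 2.3e6, 80), (3, 3.1e6, 120)]
a100_profile = [(2, 2.5e6, 40), (4, 4.8e6, 60), (6, 6.9e6, 90)]
for slo_ms in (60, 120):
    print(slo_ms,
          round(tokens_per_dollar(a10g_profile, 1.01, slo_ms)),
          round(tokens_per_dollar(a100_profile, 3.67, slo_ms)))
```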
Once again, there exists a clear boundary within the input/output length spectrum where the cost efficiency advantage shifts from A10G to A100 as request sizes increase. In fact, selecting a single GPU type to serve requests across the entire request size space misses opportunities to produce up to 72% more output tokens for the same cost. Source of Cost Efficiency Variation. Digging deeper into why request size impacts relative cost efficiency between GPUs, we find that it is largely due to the heterogeneity of GPU hardware. Given that batch size directly influences throughput (i.e., request processing rate), we inspect the source of cost efficiency variation by examining the effect of request size on achieved batch size. Figure 3 depicts the absolute batch sizes and 5 (a) Absolute batch sizes (b) Dollar-normalized batch sizes Figure 3: (a) depicts the absolute batch sizes of A10G and A100 serving Llama2-7b at maximum saturation, (b) reports the same batch sizes divided by GPU cost, plotting with respect to A10G. batch sizes normalized by instance cost of each GPU at maximum saturation. Note that Figure 3b closely resembles Figure 2a\u2019s plot of relative T/$ at maximum saturation, verifying that batch size indeed serves as a proxy for throughput. A10G and A100 have similar dollar-normalized batch sizes at 250 input/output tokens, but as the request size increases to 2000 input/output tokens, A10G\u2019s absolute batch size decreases by a factor of 9\u00d7 whereas A100\u2019s only decreases by 6\u00d7 due to its superior memory size and bandwidth. As a result, A100\u2019s cost efficiency advantage over A10G increases accordingly with the increase in request size. In contrast, reducing the request size from 250 to 25 input/output tokens sees A10G\u2019s batch size expanding by 15.2\u00d7, whereas A100\u2019s growth is more modest at 5.89\u00d7. We find that this difference is primarily due to the interference of mixing prefill and decode phases of a greater number of requests, as demonstrated in prior work [17]. Because A100\u2019s batch sizes are larger in absolute terms, A100 is more significantly constrained by per-request latency overheads than A10G is. As a result, A10G\u2019s dollar-normalized batch size exceeds A100\u2019s at short request lengths, leading to greater overall T/$ for A10G. This illustrative case demonstrates how the interaction between request size and achieved T/$ can be subtle, and creates a cost efficiency trade-off space among GPU types. Key Takeaways: GPU cost efficiency is highly dependent on the sizes of requests served. Within the request size space, there are regions where serving with different GPU types is the most cost-effective. In general, lower-end GPUs are more cost-effective for small request sizes whereas higher-end GPUs are best for large request sizes. 4.3 SLO and Cost Efficiency In this section, we show the impact of SLO on cost efficiency. We measure T/$ by finding the maximum saturation of each GPU while average TPOT remains below SLO, and repeat this across several TPOT deadlines (40ms to 120ms) as shown in Figure 4. Under tight SLO constraints (<60ms), A100 demonstrates significantly greater T/$ than A10G (> 2\u00d7) due to A10G\u2019s higher processing latency, which severely limits its output token rate. However, as the SLO is gradually loosened (60-120ms), A10G\u2019s higher latency is less problematic, dramatically increasing its T/$ and surpassing that of A100 (by > 40%). 
In general, when SLO is stringent, high-end low-latency GPUs are the most viable option because cheaper high-latency GPUs are unable to meet the steep performance requirements. Loosening the SLO increasingly permits the use of cheaper GPUs that can meet the reduced performance requirements at much lower cost. Further, Figure 5 highlights the interplay between SLO and request size to show that neither can be considered in isolation when determining cost efficiency. Varying the latency SLO adjusts the boundary in the request size space between which different GPU types are more cost effective, and also impacts the degree to which 6 Figure 4: T/$ comparison between A10G and A100 across a range of TPOT SLO parameters. Figure 5: Relative increase in T/$ when combining SLO and request size. Shaded areas indicate regions where A10G fails to satisfy the specified SLO. one GPU is more cost effective than the other. For example, with a 40-50ms SLO, A100 always has higher T/$ (by up to 123%). At 70ms, A10G shows modest benefit over A100 for small request sizes. And at 100-120ms, A10G demonstrates much greater T/$ advantage over A100 for the same request sizes (up to 61%). Key Takeaways: To comply with strict SLOs, expensive GPUs are often necessary due to the increased latency of cheaper GPUs. However, as SLO is loosened, low-end GPUs can be used to cut deployment costs. 4.4 Request Rate and Cost Efficiency In this section, we show how request rates influence which GPU, or set of GPUs, is the most cost-effective. Figure 6 illustrates the cost of serving Llama2-7b for a range of request rates using three provisioning strategies: only A10Gs, only A100s, or a mix of both A10Gs and A100s. The y-axis is absolute cost instead of T/$ because each provisioning strategy serves the same request rates and thus the same output tokens; only the cost varies across strategies. As the request rate increases, A100-only is increasingly more cost-effective relative to A10G-only. This is because the requests in this plot were of size [1000 in tokens, 250 out tokens], which \u00a7 4.2 shows is more cost effective on A100. However, even in this case, the A10G-only strategy still presents benefits at low request rates (0 \u22121.5 req/s). Idle periods of low activity are common in real-world services, and the GPU deployment should right-size to the cheaper GPU (here, A10G) when a higher-end GPU (here, A100) is drastically under-utilized. Further, a notable finding is that a hybrid deployment approach, combining both A10G and A100 GPUs, yields the greatest cost efficiency. Because A100s have such large capacity, scaling with only A100s is coarse-grained and leads to under-utilized resources. Instead, A10Gs and A100s can be mixed such that A100s satisfy the bulk of the service demands, while A10Gs handle the remaining load at reduced cost. Key Takeaways: Provisioning a mix of GPU types enables finer-grained resource scaling decisions, which boosts cost efficiency by better utilization of the provisioned instances. At low request rates, LLM deployments should right-size to cheaper low-end GPUs instead of under-utilizing expensive high-capacity GPUs. At higher request rates, a mix of GPU types can be used to better match request load. 4.5 Other Models and Hardware In this section, we demonstrate the generality of our findings by including additional GPU types and a larger model variant (Llama2-70b) to our analysis. 
In Figure 8, we present relative cost efficiency across four types 7 of GPUs, and observe a progression of the most cost efficient GPU from L4 to A10G, then A100, and finally H100 as the input/output lengths extend. This pattern underscores the advantage of high-end GPUs for processing longer context and output lengths, while low-end GPUs emerge as more cost-effective for shorter input/output scenarios. Similar trends are observed with the Llama2-70B model when comparing the H100 and A100 GPUs, as detailed in Figure 7, reinforcing these insights. Key Takeaways: The effects of request size on GPU cost efficiency (\u00a7 4.2) generalize to settings with several GPU types and larger model sizes, and similarly leads to significant GPU cost efficiency variations in the request size space. Figure 6: Aggregate GPU hourly rental cost at different request rates. A mix of A100 and A10G consistently achieves the lowest cost. Figure 7: T/$ comparison between H100x2 and A100x2 serving Llama2-70b. 5 M\u00b4 elange: Automating Cost-Efficient GPU Selection Building on the analysis in Section 4, we present M\u00b4 elange, a framework that automates the selection of GPU instances to meet an LLM service\u2019s demand at minimal cost while adhering to SLO constraints. We frame the GPU selection task as a cost-aware bin-packing problem with GPUs as bins and requests as items, and employ Integer Linear Programming (ILP) to derive the solution. 5.1 Problem Formulation We begin by defining the key terms utilized in our problem formulation and solution. Workload: A workload is characterized by its overall request rate along with a distribution of input and output sizes. Given the inherent variability in request sizes, it is crucial to treat the input and output sizes not as fixed values, but as distributions spanning a range of possible lengths. Specifically, as illustrated in Figure 9, a workload is a histogram where each bucket corresponds to a range of request sizes and a bucket\u2019s value is the request rate of requests within the bucket\u2019s size range. Deployment Cost: Cost is computed by summing the hourly rates for each of the selected instances. SLO: We use average TPOT to define SLO, however, M\u00b4 elange can be extended to other definitions of SLO, such as time to first token (TTFT), by profiling maximum T/$ within SLO constraints described in \u00a7 4.1 for any given latency constraint definition. Problem Definition: Given a workload, GPU instance costs, and SLO requirements, our objective is to provision GPUs that can serve the workload at minimal cost while adhering to latency SLO constraints. 8 (a) Best GPU relative to second best GPU (b) Best GPU relative to worst GPU Figure 8: Cost efficiency comparison across four GPUs. Tile colors indicate which GPU achieves greatest T/$ at max saturation for the respective request size. Tile values in (a) are the percent increase in T/$ of the best GPU compared to the second best. Tile values in (b) compare the best GPU to the worst GPU. Black boxes indicate request sizes for which only A100 and H100 are compared because A10G and L4 have too small memory capacity to handle a single request within this size, with more detail in \u00a7 6.2 . 3x A10G 2x A100 1x H100 Obj: Minimize Cost Constraint: Meet SLO Figure 9: Workflow illustration depicting the process of segmenting request rates into slices, followed by the allocation of hardware resources based on solver recommendations. 
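Relating this to the problem formulation in § 5.1, the inputs can be pictured as plain data structures. In the sketch below, the GPU hourly prices come from Table 1, while the bucket boundaries, request rates, and SLO deadline are illustrative values, not figures from the paper.

```python
# Sketch of the Section 5.1 problem inputs as plain data structures.
workload_histogram = {
    # (input-length range, output-length range) -> aggregate request rate (req/s)
    ((1, 100),    (1, 100)):   3.0,
    ((1, 100),    (100, 500)): 1.5,
    ((100, 2000), (1, 100)):   0.8,
    ((100, 2000), (100, 500)): 0.4,
}
gpu_hourly_cost = {"L4": 0.70, "A10G": 1.01, "A100": 3.67, "H100": 7.516}  # Table 1
slo = {"metric": "avg_TPOT_ms", "deadline": 120}  # configurable latency SLO
```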
5.2 Allocation Algorithm The intuition of M\u00e9lange\u2019s solution is to find the minimal-cost set of GPUs (bins) into which the workload (items) can be bin-packed. To do so, our strategy partitions workload buckets into slices, then assigns the slices to GPUs. Our constraints ensure that the load added to each GPU by the assigned slices does not surpass its maximum capacity. The optimization objective is to minimize the total deployment cost. We discuss bucket size considerations (\u00a7 5.2.1), describe slices in more detail (\u00a7 5.2.2), discuss how load is calculated (\u00a7 5.2.3), then finally detail our ILP formulation (\u00a7 5.2.4). 5.2.1 Request Buckets As described in \u00a7 5.1, a workload is represented by a histogram. The histogram has two dimensions, input length and output length, and each bucket\u2019s value is the aggregate request rate for requests within the bucket\u2019s size range. We make the simplifying assumption that the load (see \u00a7 5.2.3) of each request is the same as that of the largest request size in the same bucket. This simplifies handling diverse request sizes at the cost of over-estimating the load. Bucket sizes can be tuned to reach the desired balance between granularity and solution complexity, but we have not found overall performance to be sensitive to bucket sizes. 5.2.2 Slices A naive bin-packing of the workload into GPUs is to assign each bucket to a single GPU. However, the overall load of a single bucket may exceed the capacity of a single GPU, and the bucket may be most cost-effectively served by splitting it across different GPU types. Therefore, for finer-grained bin-packing, buckets are broken down into slices, each characterized by a request size and a request rate. A parameter, the slice factor, indicates the number of slices that each bucket is divided into. In a setting with a slice factor of 8 and a bucket corresponding to requests of size [25\u2212100 input tokens, 25\u2212100 output tokens] with a request rate of 4 requests/s, the bucket would be segmented into 8 slices, each corresponding to a request rate of 0.5 requests/s. 5.2.3 Load The ILP solver requires an estimate of the load each slice contributes to a GPU to ensure that the load assigned to an instance does not exceed its capacity and violate the latency SLO. The load of a slice with request size s and rate r on GPU G is calculated as $\frac{r}{\mathrm{MaxTput}(G, s, \mathrm{SLO})}$, where MaxTput(G, s, SLO) is the maximum requests/s that G can achieve for requests of size s while remaining under the latency deadline SLO. For instance, if MaxTput(G, s, SLO) = 10 reqs/s and r = 1, the load is calculated as 1/10 = 0.1, or 10%. Each GPU\u2019s maximum capacity is defined as 1 (or 100%). This approximation allows us to calculate the aggregate load of requests with differing sizes and rates. Prior work has proposed cost models for LLM requests [32, 41], but there is not yet a definitive formulation. We found our simple approximation to perform well, but it can easily be replaced with alternative cost models. Based on offline profiling, we compute MaxTput(G, s, SLO) for each bucket in the workload histogram. 5.2.4 ILP Formulation We now describe our ILP formulation. We formulate the problem with two primary decision variables. First, let $A \in \{0, 1\}^{N \times M}$ be a matrix, where N denotes the total number of slices and M represents the number of GPU instance types. An element $A_{i,j}$ of this matrix is set to 1 if slice i is assigned to GPU type j, and 0 otherwise.
The second decision variable, B, is a vector of length M, where each element $B_j$ specifies the number of GPUs of type j to be allocated. $c_j$ denotes the cost of GPU type j and $r_i$ is the request rate of slice i. L is computed offline by the process described in \u00a7 5.2.3, and element $L_{i,j}$ is the percent load of 1 req/s of slice i\u2019s request size on GPU type j. Our objective is to minimize the total GPU allocation cost, with the following mathematical representation: $\arg\min_{B} \sum_{j=1}^{M} B_j \cdot c_j$ (1). The ILP constraints are as follows. First, each task slice is assigned to exactly one GPU type: $\forall i \in \{1, \ldots, N\}, \; \sum_{j=1}^{M} A_{i,j} = 1$ (2). Second, for each GPU type, the total number of GPUs designated in vector B must adequately accommodate the cumulative load prescribed to it in matrix A: $\forall j \in \{1, \ldots, M\}, \; \sum_{i=1}^{N} A_{i,j} \cdot L_{i,j} \cdot r_i \le B_j$ (3). Lastly, the entries within matrix A are binary, and the elements of vector B are non-negative integers: $\forall i \in \{1, \ldots, N\}, \forall j \in \{1, \ldots, M\}, \; A_{i,j} \in \{0, 1\}$ (4) and $\forall j \in \{1, \ldots, M\}, \; B_j \ge 0$ (5). Upon solving equations (1) through (5), the decision variable B holds the minimal-cost set of GPUs that meets the SLO constraint. We use an off-the-shelf solver to solve the ILP problem [30]. 6 Evaluation In this section, we assess the performance of M\u00e9lange using four GPU types across settings of diverse request sizes, rates, and SLOs. Our evaluations show that M\u00e9lange consistently achieves significant cost savings (up to 77%) compared to single-GPU-type strategies, and that M\u00e9lange\u2019s selected GPU allocations successfully attain the TPOT SLO for over 99.5% of requests. 6.1 Methodology Hardware Setup. We use four NVIDIA GPUs in our evaluations, with specifications detailed in Table 1. To determine GPU cost, we select the lowest on-demand price available from major cloud providers (AWS, Azure, and GCP). Because on-demand H100 is not offered by these major providers, we defer to the pricing from RunPod [40] due to its popularity and availability. To ensure fair cost comparisons, we normalize RunPod\u2019s H100 pricing to match the pricing structures of major platforms. We calculate this by comparing RunPod\u2019s H100 cost ($4.69) to RunPod\u2019s A100-80G cost ($2.29), then adjusting relative to the A100\u2019s price on major clouds ($3.67), resulting in a normalized price of (4.69/2.29) \u00d7 3.67 = $7.516 for H100. Model and Inference Engine. In each experiment, we serve Llama2-7B (Llama-2-7b-hf) [45] using version 0.2.7 of the vLLM inference engine [21] with default parameters. M\u00e9lange Parameters. The bucket ranges correspond to Figure 8 and comprise 10 input length ranges and 6 output length ranges, for a total of 60 buckets. The slice factor is set to 8, for a total of 60 \u00b7 8 = 480 slices. Datasets. We evaluate across three distinct datasets to cover a wide range of application scenarios. The specific input/output length distributions of these datasets are illustrated in Figure 10. \u2022 Short context: This scenario simulates real-time conversational dynamics by employing the Chatbot Arena dataset (lmsys/lmsys-chat-1m) [53], which is derived from real-world chatbot conversations. The dataset is skewed towards shorter contexts (< 2000 tokens) because much of the data was generated in conversation with models that did not yet have a larger context window. \u2022 Long context: This scenario represents tasks with extensive input, such as summarization.
We utilize the PubMed dataset (ccdv/pubmed-summarization) [7], comprising 133 thousand scientific papers from PubMed.com, a popular dataset for large-scale text summarization studies. \u2022 Mixed long/short context: This scenario captures settings with a combination of long and short context, such as an assistant that engages in succinct dialogue and responds to large document-based queries. To model this, we create a synthetic dataset by sampling 80% of requests from the Arena dataset and 20% of requests from the PubMed dataset. SLOs. We referred to current LLM inference benchmarks [3] to set TPOT SLOs, and opted for 40ms in contexts where swift responses are essential, and 120ms where longer response times are acceptable. Both selected SLOs surpass the average human reading speed, ensuring that our SLOs align with practical user experience considerations. However, as discussed in \u00a7 4.1, M\u00b4 elange is flexible to support alternative definitions of SLO. Baselines. We benchmark against deployments that utilize solely one GPU type. To derive baseline GPU allocations, we use the same ILP formulation from \u00a7 5.2.4 but restrict the solver to only a single GPU type. 6.2 Cost Savings Analysis We first compare the overall deployment costs of M\u00b4 elange\u2019s allocation compared to the single-GPU-type baselines for each dataset and SLO across a range of request rates (1-32 requests/s). Figure 11 displays all costs normalized against the cost of the A100-only strategy (shown in blue dotted lines), and the detailed GPU allocations are included in Appendix A.1. A10G-only and L4-only provisioning strategies are only 11 0 2500 5000 7500 10000 12500 Input Length (tokens) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Fraction Dataset Mixed (mean=1278.04) Arena (mean=329.43) Pubmed (mean=4174.13) (a) Input length distributions. 0 250 500 750 1000 Output Length (tokens) 0.000 0.025 0.050 0.075 0.100 0.125 Fraction Dataset Mixed (mean=219.87) Arena (mean=195.66) Pubmed (mean=314.1) (b) Output length distributions. Figure 10: Dataset input and output length distributions. included in the Arena dataset analysis because of PubMed and Mixed datasets\u2019 large requests. The key-value cache generated from even a single large request (\u223c12000+ tokens) exceeds the memory capacity of L4 and A10G (24GB). In M\u00b4 elange\u2019s allocation, L4 and A10G are included but restricted to only serve requests of size less than 12000 tokens. Loose SLO: 120ms Figures 11a, 11c, and 11e depict results with a loose 120ms TPOT SLO. M\u00b4 elange\u2019s mixed-GPU allocation is consistently the most cost-efficient approach, achieving cost reductions of up to 77%, 33% and 51% across the three evaluated datasets. \u2022 Arena Dataset. In Figure 11a, M\u00b4 elange achieves 15-77% cost savings. Lower-tier GPUs such as A10G/L4 offer superior cost efficiency in comparison to A100/H100 when handling lower request rates. In particular, for 1-2 requests/s, H100 has egregiously high cost because the load is not enough to even saturate a single GPU. Yet, as request rate increases, A10G/L4\u2019s cost advantage diminishes as high-capacity GPUs become a more reasonable choice. This aligns with findings in \u00a7 4.4 that emphasize matching GPU size with request rate. Note, however, that A10G/L4 are still competitive with A100 at higher request rates due to their T/$ advantage for smaller request sizes. \u2022 PubMed Dataset. In Figure 11c, M\u00b4 elange achieves 15-33% cost savings. 
H100\u2019s cost efficiency generally outperforms A100\u2019s, attributable to the dataset\u2019s longer context lengths for which H100 achieves higher T/$. However, there are still many request sizes for which A100 is the best, and this creates the opportunity for the mixed-GPU strategy to squeeze up to 25% cost savings. \u2022 Mixed Dataset. In Figure 11e, M\u00e9lange achieves 13-51% cost savings. A100\u2019s cost efficiency is boosted relative to the PubMed dataset due to it being generally more cost efficient than H100 for small request sizes. This distinction highlights how the nature of the workload \u2014 specifically the variance in request lengths \u2014 can significantly influence relative cost efficiency across GPU types. Strict SLO: 40ms Figures 11b, 11d, and 11f depict the results from tightening the TPOT SLO to 40ms. Once again, M\u00e9lange achieves the lowest cost in all settings, with up to 68%, 22%, and 51% reduction across the three evaluated datasets. \u2022 Arena Dataset In Figure 11b, M\u00e9lange achieves 9-68% cost savings. A10G/L4 display considerably higher relative cost than in the loose SLO setting (Figure 11a). This is explained by A10G/L4\u2019s higher latency, which requires many more instances to be provisioned in order to meet the tight SLO deadline. M\u00e9lange\u2019s mixed-GPU strategy is able to adapt to the strict SLO and provision mostly A100/H100\u2019s which exhibit much lower latencies. \u2022 PubMed Dataset In Figure 11d, M\u00e9lange achieves 2-22% cost savings. H100 achieves a significant cost advantage over A100, especially relative to the 120ms setting (Figure 11c). H100 generally achieves lower latency than A100, making it the preferred option for long-context tight-SLO settings. Figure 11: Deployment cost across different datasets and SLOs, for request rates of 1-32 req/s with cost normalized to the A100-only strategy. Panels: (a) short context (Arena), SLO = 120ms; (b) short context (Arena), SLO = 40ms; (c) long context (PubMed), SLO = 120ms; (d) long context (PubMed), SLO = 40ms; (e) mixed long/short context, SLO = 120ms; (f) mixed long/short context, SLO = 40ms. \u2022 Mixed Dataset In Figure 11f, M\u00e9lange achieves 4-51% cost savings. A100 gains back some advantage over H100 relative to the PubMed setting due to the prevalence of shorter-context requests. Experiment Takeaways In loose SLO settings, M\u00e9lange can utilize all GPU types (both low- and high-end) to serve request sizes for which they achieve greatest T/$ and closely match capacity to the request volume, significantly reducing costs (up to 77%). In tight SLO settings, A10G and L4 are less beneficial due to their high latency, reducing the cost savings M\u00e9lange can achieve relative to single-GPU-type strategies. However, even in this setting, M\u00e9lange squeezes large cost savings (up to 67%) based on the same principles.
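To make the allocation step behind these results concrete, the following is a hedged sketch of the ILP from Section 5.2.4 (equations (1)-(5)) written with the open-source PuLP modeler and its bundled CBC solver. The paper only states that an off-the-shelf solver is used [30], so the choice of PuLP/CBC, the two GPU types, and the per-slice rates and loads below are illustrative assumptions rather than the authors' setup.

```python
# Hedged sketch of the ILP in equations (1)-(5) using PuLP (pip install pulp).
# Solver choice, GPU types, prices, and slice data are assumptions for illustration.
import pulp

gpus = ["A10G", "A100"]                    # GPU types (index j)
cost = {"A10G": 1.01, "A100": 3.67}        # c_j: hourly price (assumed)
slices = [                                  # per slice i: rate r_i and load L_{i,j} per 1 req/s
    {"rate": 0.5, "load": {"A10G": 0.20, "A100": 0.05}},
    {"rate": 0.5, "load": {"A10G": 0.60, "A100": 0.15}},
    {"rate": 1.0, "load": {"A10G": 0.90, "A100": 0.25}},
]

prob = pulp.LpProblem("melange_gpu_allocation", pulp.LpMinimize)

# A_{i,j} in {0,1}: slice i assigned to GPU type j; B_j: non-negative integer GPU count.
A = {(i, j): pulp.LpVariable(f"A_{i}_{j}", cat="Binary")
     for i in range(len(slices)) for j in gpus}
B = {j: pulp.LpVariable(f"B_{j}", lowBound=0, cat="Integer") for j in gpus}

# (1) Objective: minimize total hourly cost of the allocation.
prob += pulp.lpSum(B[j] * cost[j] for j in gpus)

# (2) Each slice is assigned to exactly one GPU type.
for i in range(len(slices)):
    prob += pulp.lpSum(A[(i, j)] for j in gpus) == 1

# (3) Provisioned GPUs of each type must cover the load assigned to that type.
for j in gpus:
    prob += pulp.lpSum(A[(i, j)] * s["load"][j] * s["rate"]
                       for i, s in enumerate(slices)) <= B[j]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({j: int(B[j].value()) for j in gpus})  # minimal-cost GPU counts (decision variable B)
```

Constraints (4) and (5) are encoded by the variable categories (Binary for A, non-negative Integer for B), so only the objective and constraints (2)-(3) appear explicitly.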
These evaluations highlight the key benefits of exploiting GPU heterogeneity in a unified allocation strategy: 1) GPU types can serve request sizes for which they have greatest T/$, 2) mixing GPU types enables fine-grained provisioning to closely match capacity to request volume, and 3) the allocation strategy can adapt to differing SLO stringency levels and continue to utilize the benefits of (1) and (2). In summary, M\u00b4 elange efficiently navigates the diversity of request sizes, rates, SLOs, and GPU types to automatically find the best GPU allocation and significantly reduce deployment cost. 6.3 SLO Satisfaction Next, we assess M\u00b4 elange\u2019s ability to select GPU allocations that meet the specified TPOT SLO. To do so, we provision actual cloud GPU instances based on M\u00b4 elange\u2019s selected allocation for each of the six experiment 13 Figure 12: Experiment TPOT CDFs. Figure 13: TPOT CDF from unknown output length experiment. settings in 6.2 at 4 requests/s. We deploy Llama2-7b with vLLM-0.2.7 on each of the provisioned GPUs. We sample request sizes randomly from the chosen dataset to serve 2000 live requests. We record the latency of each request and divide by output token length to derive average TPOT. Load Balancer. Most settings use multiple GPU instances, requiring a load balancer to distribute requests across them. The problem of load balancing variable-size requests to heterogeneous backends has been previously explored [11], and we leave it to future work to create adaptations for serving LLMs on heterogeneous GPUs. We instead use a simple variation of Join Shortest Queue (JSQ) routing [14]: the load balancer tracks outstanding requests for each GPU, and converts them to percent load as described in \u00a7 5.2.3. Upon receiving a new request, the load balancer chooses a GPU backend such that the resulting percent load on the chosen GPU is minimized relative to choosing any other GPU. This policy performed well in our experiments, but we expect that improvements to the load balancing policy will reduce tail latency. Results. Figure 12 presents CDFs of the observed average TPOTs across experiments. With an SLO of 120ms, over 99.95% of requests met SLO. When the SLO was tightened to 40ms, SLO adherence reduced to over 99.5% of requests. M\u00b4 elange effectively chose GPU allocations that reduce cost while adhering to latency objectives, however, we recognize that services may require even higher SLO adherence, so we investigated the source of SLO violations in our experiment. SLO Violation Investigation. Of all requests that violated TPOT SLO, we found that 84% failed to meet SLO due to one of two reasons: request rate bursts or co-location with large requests. In our experiments, requests are sent according to a Poisson process, which occasionally creates short-lived bursts that overload the GPU capacity. Further, we choose the size of model request by randomly sampling from the configured dataset. Occasionally, several large requests are chosen in sequence, which can temporarily exceed the service capacity. In an online production environment, it is common practice to over-provision resources in order to absorb such bursts and other load variations. Within our framework, a desired over-provisioning rate (say, 20%) can be achieved by increasing the request rate input to the solver by the same proportion (20%). We discuss the future work of practically deploying a system based on M\u00b4 elange in \u00a7 7. 
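For reference, here is a minimal sketch of the load-aware Join-Shortest-Queue variant described above, paraphrased in code: each backend's outstanding requests are converted to percent load via the Section 5.2.3 approximation, and a new request is routed to the backend whose resulting load would be smallest. The GPU names, load table, and class structure are assumptions for illustration, not the system's actual load balancer.

```python
# Hedged sketch of the load-aware JSQ routing policy from Section 6.3.
# GPU names, per-request load values, and helper names are illustrative assumptions.
from collections import defaultdict

# Percent load contributed by one outstanding request of a given size bucket on each
# backend, i.e., 1 / MaxTput(G, s, SLO) from Section 5.2.3 (placeholder values).
LOAD_PER_REQUEST = {
    "A100-0": {"small": 0.02, "large": 0.10},
    "A10G-0": {"small": 0.08, "large": 0.40},
}

class LoadAwareJSQ:
    def __init__(self, backends):
        self.backends = list(backends)
        self.outstanding = defaultdict(list)  # backend -> size buckets of in-flight requests

    def current_load(self, backend):
        return sum(LOAD_PER_REQUEST[backend][b] for b in self.outstanding[backend])

    def route(self, size_bucket):
        """Send the request to the backend whose resulting percent load is smallest."""
        best = min(self.backends,
                   key=lambda g: self.current_load(g) + LOAD_PER_REQUEST[g][size_bucket])
        self.outstanding[best].append(size_bucket)
        return best

    def complete(self, backend, size_bucket):
        self.outstanding[backend].remove(size_bucket)

lb = LoadAwareJSQ(["A100-0", "A10G-0"])
print(lb.route("large"), lb.route("small"))  # with these numbers: A100-0 then A10G-0
```

As noted above, this simple policy worked well in the experiments, and refinements to it would mainly show up as reduced tail latency.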
6.4 Unknown Output Length As discussed in \u00a7 3.2, in order to focus on measuring the quality of M\u00e9lange\u2019s chosen GPU allocation, our evaluations utilize a load balancing policy that knows output lengths. Given that this is not a realistic assumption, we briefly evaluate M\u00e9lange\u2019s performance with a simple load balancing policy that is unaware of output lengths. We again note that we are actively working on addressing the limitations of unknown output lengths, and believe that LLM-specific load balancing that addresses this challenge is an exciting area for future work. We repeated the SLO satisfaction experiment (\u00a7 6.3) on the Arena dataset with a TPOT SLO of 40ms, but restricted the load balancer to only see request input lengths. The load balancer estimates output length by computing the average of all previous requests\u2019 output lengths. Otherwise, load balancing is performed identically to the experiments in \u00a7 6.3. Figure 13 presents the experiment\u2019s TPOT CDF. Only 97.2% of requests met the 40ms deadline, compared to 99.5% in the setting where output length is known, a 5.6\u00d7 increase in SLO violations. Almost all (91%) of the additional SLO violations were due to large requests landing on a lower-end GPU that would have otherwise landed on a higher-end GPU if the output length was known. This result demonstrates that errors in estimating output length can manifest as increased tail latency due to poor load balancing decisions, further motivating future work on load balancing for LLMs. Nevertheless, we show that over-provisioning can account for the error in predicting output lengths. We re-ran the experiment, but inflated M\u00e9lange\u2019s request rate input by 5%, and observed that SLO adherence jumped back up to over 99.5%. 6.5 Solver Time We present the solver execution time from each experiment in Table 2.
Request Rate | Arena, SLO=120ms | Arena, SLO=40ms | PubMed, SLO=120ms | PubMed, SLO=40ms | Mix, SLO=120ms | Mix, SLO=40ms
1  | 0.137 | 0.177 | 0.232 | 0.295 | 0.168 | 0.336
2  | 0.194 | 0.265 | 0.234 | 0.334 | 0.253 | 0.381
4  | 0.192 | 0.346 | 0.287 | 0.381 | 0.297 | 0.459
8  | 0.248 | 0.433 | 0.269 | 0.384 | 0.321 | 0.545
16 | 0.299 | 0.448 | 0.389 | 0.509 | 0.439 | 0.537
32 | 0.316 | 0.494 | 0.791 | 0.96  | 0.912 | 1.14
Table 2: Solver execution time (seconds) per dataset and SLO setting across request rates (req/s).
Across all datasets and request rates, the solver\u2019s execution time remains under 1.2 seconds, which is negligible compared to workload execution time. We observe a modest increase in solver execution time with higher request volumes, attributed to the greater complexity in slice assignment due to a greater number of required GPUs. However, this increase is sub-linear relative to the increase in request rate, and the solver\u2019s execution time remains practical. Further, the execution of the solver is a one-time event. Users are required to run the solver only prior to deployment or when there is a significant change in the distribution of request sizes or rates. 7 Future Work There are several interesting directions related to leveraging heterogeneous GPUs for LLM serving. First, adapting heterogeneity-aware load balancing policies specifically for LLM systems where output length is unknown could reduce the tail latency that occurs due to poor balancing decisions. Further, we believe that generative models beyond LLMs, including image generation, video generation, and embedding models, could each benefit from heterogeneous serving systems.
Finally, M\u00b4 elange effectively derives the best GPU allocation for a fixed workload distribution and request rate, but does not address other challenges of deploying a live LLM service such as handling GPU unavailability or responding to dynamically changing request rate and request size distribution. 8 Conclusion In this study, we conduct an analysis of GPU cost efficiency in LLM service deployments, and identify three key factors (request sizes, request rates, and Service Level Objectives (SLOs)) that significantly impact GPU cost efficiency. Based on these findings, we introduce M\u00b4 elange, a framework for deriving the minimal-cost GPU allocation that attains SLO for a given LLM service specification. We frame the task of GPU selection as a cost-aware bin-packing problem and formulate it as an integer linear program. Through evaluations on a range of GPUs, request sizes, request rates, and latency SLOs, M\u00b4 elange consistently demonstrates significant reductions in deployment costs (up to 77%) while providing high SLO attainment. 15" + }, + { + "url": "http://arxiv.org/abs/2211.10438v7", + "title": "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models", + "abstract": "Large language models (LLMs) show excellent performance but are compute- and\nmemory-intensive. Quantization can reduce memory and accelerate inference.\nHowever, existing methods cannot maintain accuracy and hardware efficiency at\nthe same time. We propose SmoothQuant, a training-free, accuracy-preserving,\nand general-purpose post-training quantization (PTQ) solution to enable 8-bit\nweight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that\nweights are easy to quantize while activations are not, SmoothQuant smooths the\nactivation outliers by offline migrating the quantization difficulty from\nactivations to weights with a mathematically equivalent transformation.\nSmoothQuant enables an INT8 quantization of both weights and activations for\nall the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG,\nLlama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x\nspeedup and 2x memory reduction for LLMs with negligible loss in accuracy.\nSmoothQuant enables serving 530B LLM within a single node. Our work offers a\nturn-key solution that reduces hardware costs and democratizes LLMs. Code is\navailable at https://github.com/mit-han-lab/smoothquant.", + "authors": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han", + "published": "2022-11-18", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.01188v2", + "title": "Petals: Collaborative Inference and Fine-tuning of Large Models", + "abstract": "Many NLP tasks benefit from using large language models (LLMs) that often\nhave more than 100 billion parameters. With the release of BLOOM-176B and\nOPT-175B, everyone can download pretrained models of this scale. Still, using\nthese models requires high-end hardware unavailable to many researchers. In\nsome cases, LLMs can be used more affordably via RAM offloading or hosted APIs.\nHowever, these techniques have innate limitations: offloading is too slow for\ninteractive inference, while APIs are not flexible enough for research that\nrequires access to weights, attention or logits. 
In this work, we propose\nPetals - a system for inference and fine-tuning of large models collaboratively\nby joining the resources of multiple parties. We demonstrate that this strategy\noutperforms offloading for very large models, running inference of BLOOM-176B\non consumer GPUs with $\\approx$ 1 step per second, which is enough for many\ninteractive LLM applications. Unlike most inference APIs, Petals also natively\nexposes hidden states of served models, allowing to train and share custom\nmodel extensions based on efficient fine-tuning methods.", + "authors": "Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel", + "published": "2022-09-02", + "updated": "2023-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.00774v3", + "title": "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", + "abstract": "We show for the first time that large-scale generative pretrained transformer\n(GPT) family models can be pruned to at least 50% sparsity in one-shot, without\nany retraining, at minimal loss of accuracy. This is achieved via a new pruning\nmethod called SparseGPT, specifically designed to work efficiently and\naccurately on massive GPT-family models. We can execute SparseGPT on the\nlargest available open-source models, OPT-175B and BLOOM-176B, in under 4.5\nhours, and can reach 60% unstructured sparsity with negligible increase in\nperplexity: remarkably, more than 100 billion weights from these models can be\nignored at inference time. SparseGPT generalizes to semi-structured (2:4 and\n4:8) patterns, and is compatible with weight quantization approaches. The code\nis available at: https://github.com/IST-DASLab/sparsegpt.", + "authors": "Elias Frantar, Dan Alistarh", + "published": "2023-01-02", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.00978v4", + "title": "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration", + "abstract": "Large language models (LLMs) have fundamentally transformed the capabilities\nof numerous applications, from natural language processing to more intricate\ndomain-specific tasks in robotics and autonomous driving. Moreover, the\nimportance of on-device LLMs has grown significantly in the recent years.\nRunning LLMs on edge devices not only promises reduced latency and improved\nuser experience but also aligns with the increasing need for user privacy, as\ndata processing can occur locally. However, the astronomical model sizes of\nmodern LLMs and constraints of the edge devices, primarily in terms of memory\nsize and bandwidth, pose significant deployment challenges. In this paper, we\npropose Activation-aware Weight Quantization (AWQ), a hardware-friendly\napproach for LLM low-bit weight-only quantization. Our method is based on the\nobservation that weights are not equally important: protecting only 1% of\nsalient weights can greatly reduce quantization error. We then propose to\nsearch for the optimal per-channel scaling that protects the salient weights by\nobserving the activation, not weights. 
AWQ does not rely on any backpropagation\nor reconstruction, so it can well preserve LLMs' generalization ability on\ndifferent domains and modalities, without overfitting to the calibration set.\nAWQ outperforms existing work on various language modeling and domain-specific\nbenchmarks (coding and math). Thanks to better generalization, it achieves\nexcellent quantization performance for instruction-tuned LMs and, for the first\ntime, multi-modal LMs. Alongside AWQ, we implement TinyChat, an efficient and\nflexible inference framework tailored for on-device LLM/VLMs, offering more\nthan 3x speedup over the Huggingface FP16 implementation on both desktop and\nmobile GPUs. It also democratizes the deployment of the 70B Llama-2 model on\nmobile GPUs.", + "authors": "Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, Song Han", + "published": "2023-06-01", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.16369v1", + "title": "SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills", + "abstract": "Large Language Model (LLM) inference consists of two distinct phases -\nprefill phase which processes the input prompt and decode phase which generates\noutput tokens autoregressively. While the prefill phase effectively saturates\nGPU compute at small batch sizes, the decode phase results in low compute\nutilization as it generates one token at a time per request. The varying\nprefill and decode times also lead to imbalance across micro-batches when using\npipeline parallelism, resulting in further inefficiency due to bubbles.\n We present SARATHI to address these challenges. SARATHI employs\nchunked-prefills, which splits a prefill request into equal sized chunks, and\ndecode-maximal batching, which constructs a batch using a single prefill chunk\nand populates the remaining slots with decodes. During inference, the prefill\nchunk saturates GPU compute, while the decode requests 'piggyback' and cost up\nto an order of magnitude less compared to a decode-only batch. Chunked-prefills\nallows constructing multiple decode-maximal batches from a single prefill\nrequest, maximizing coverage of decodes that can piggyback. Furthermore, the\nuniform compute design of these batches ameliorates the imbalance between\nmicro-batches, significantly reducing pipeline bubbles.\n Our techniques yield significant improvements in inference performance across\nmodels and hardware. For the LLaMA-13B model on A6000 GPU, SARATHI improves\ndecode throughput by up to 10x, and accelerates end-to-end throughput by up to\n1.33x. For LLaMa-33B on A100 GPU, we achieve 1.25x higher end-to-end-throughput\nand up to 4.25x higher decode throughput. When used with pipeline parallelism\non GPT-3, SARATHI reduces bubbles by 6.29x, resulting in an end-to-end\nthroughput improvement of 1.91x.", + "authors": "Amey Agrawal, Ashish Panwar, Jayashree Mohan, Nipun Kwatra, Bhargav S. 
Gulavani, Ramachandran Ramjee", + "published": "2023-08-31", + "updated": "2023-08-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.09670v2", + "title": "DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving", + "abstract": "DistServe improves the performance of large language models (LLMs) serving by\ndisaggregating the prefill and decoding computation. Existing LLM serving\nsystems colocate the two phases and batch the computation of prefill and\ndecoding across all users and requests. We find that this strategy not only\nleads to strong prefill-decoding interferences but also couples the resource\nallocation and parallelism plans for both phases. LLM applications often\nemphasize individual latency for each phase: time to first token (TTFT) for the\nprefill phase and time per output token (TPOT) of each request for the decoding\nphase. In the presence of stringent latency requirements, existing systems have\nto prioritize one latency over the other, or over-provision compute resources\nto meet both.\n DistServe assigns prefill and decoding computation to different GPUs, hence\neliminating prefill-decoding interferences. Given the application's TTFT and\nTPOT requirements, DistServe co-optimizes the resource allocation and\nparallelism strategy tailored for each phase. DistServe also places the two\nphases according to the serving cluster's bandwidth to minimize the\ncommunication caused by disaggregation. As a result, DistServe significantly\nimproves LLM serving performance in terms of the maximum rate that can be\nserved within both TTFT and TPOT constraints on each GPU. Our evaluations show\nthat on various popular LLMs, applications, and latency requirements, DistServe\ncan serve 4.48x more requests or 10.2x tighter SLO, compared to\nstate-of-the-art systems, while staying within latency constraints for > 90% of\nrequests.", + "authors": "Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, Hao Zhang", + "published": "2024-01-18", + "updated": "2024-03-19", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.09213v1", + "title": "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", + "abstract": "Specialized accelerators such as GPUs, TPUs, FPGAs, and custom ASICs have\nbeen increasingly deployed to train deep learning models. These accelerators\nexhibit heterogeneous performance behavior across model architectures. Existing\nschedulers for clusters of accelerators, which are used to arbitrate these\nexpensive training resources across many users, have shown how to optimize for\nvarious multi-job, multi-user objectives, like fairness and makespan.\nUnfortunately, existing schedulers largely do not consider performance\nheterogeneity. In this paper, we propose Gavel, a heterogeneity-aware scheduler\nthat systematically generalizes a wide range of existing scheduling policies.\nGavel expresses these policies as optimization problems, making it easy to\noptimize for objectives in a heterogeneity-aware way, while also being\ncognizant of performance optimizations like space sharing. Gavel then uses a\nround-based scheduling mechanism to ensure jobs receive their ideal allocation\ngiven the target scheduling policy. 
Gavel's heterogeneity-aware policies allow\na heterogeneous cluster to sustain higher input load, and improve end\nobjectives such as average job completion time and makespan by up to 3.5x\ncompared to heterogeneity-agnostic policies.", + "authors": "Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia", + "published": "2020-08-20", + "updated": "2020-08-20", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.18677v1", + "title": "Splitwise: Efficient generative LLM inference using phase splitting", + "abstract": "Recent innovations in generative large language models (LLMs) have made their\napplications and use-cases ubiquitous. This has led to large-scale deployments\nof these models, using complex, expensive, and power-hungry AI accelerators,\nmost commonly GPUs. These developments make LLM inference efficiency an\nimportant challenge. Based on our extensive characterization, we find that\nthere are two main phases during an LLM inference request: a compute-intensive\nprompt computation, and a memory-intensive token generation, each with distinct\nlatency, throughput, memory, and power characteristics. Despite\nstate-of-the-art batching and scheduling, the token generation phase\nunderutilizes compute resources. Specifically, unlike compute-intensive prompt\ncomputation phases, token generation phases do not require the compute\ncapability of the latest GPUs, and can be run with lower power and cost.\n With Splitwise, we propose splitting the two phases of a LLM inference\nrequest on to separate machines. This allows us to use hardware that is\nwell-suited for each phase, and provision resources independently per phase.\nHowever, splitting an inference request across machines requires state transfer\nfrom the machine running prompt computation over to the machine generating\ntokens. We implement and optimize this state transfer using the fast back-plane\ninterconnects available in today's GPU clusters.\n We use the Splitwise technique to design LLM inference clusters using the\nsame or different types of machines for the prompt computation and token\ngeneration phases. Our clusters are optimized for three key objectives:\nthroughput, cost, and power. In particular, we show that we can achieve 1.4x\nhigher throughput at 20% lower cost than current designs. Alternatively, we can\nachieve 2.35x more throughput with the same cost and power budgets.", + "authors": "Pratyush Patel, Esha Choukse, Chaojie Zhang, \u00cd\u00f1igo Goiri, Aashaka Shah, Saeed Maleki, Ricardo Bianchini", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.AR", + "cats": [ + "cs.AR", + "cs.DC", + "I.2.0, I.3.1, C.4" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.13878v1", + "title": "Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism", + "abstract": "Transformer models have achieved state-of-the-art performance on various\ndomains of applications and gradually becomes the foundations of the advanced\nlarge deep learning (DL) models. However, how to train these models over\nmultiple GPUs efficiently is still challenging due to a large number of\nparallelism choices. Existing DL systems either rely on manual efforts to make\ndistributed training plans or apply parallelism combinations within a very\nlimited search space. 
In this approach, we propose Galvatron, a new system\nframework that incorporates multiple popular parallelism dimensions and\nautomatically finds the most efficient hybrid parallelism strategy. To better\nexplore such a rarely huge search space, we 1) involve a decision tree to make\ndecomposition and pruning based on some reasonable intuitions, and then 2)\ndesign a dynamic programming search algorithm to generate the optimal plan.\nEvaluations on four representative Transformer workloads show that Galvatron\ncould perform automatically distributed training with different GPU memory\nbudgets. Among all evluated scenarios, Galvatron always achieves superior\nsystem throughput compared to previous work with limited parallelism.", + "authors": "Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui", + "published": "2022-11-25", + "updated": "2022-11-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DB", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.14135v2", + "title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", + "abstract": "Transformers are slow and memory-hungry on long sequences, since the time and\nmemory complexity of self-attention are quadratic in sequence length.\nApproximate attention methods have attempted to address this problem by trading\noff model quality to reduce the compute complexity, but often do not achieve\nwall-clock speedup. We argue that a missing principle is making attention\nalgorithms IO-aware -- accounting for reads and writes between levels of GPU\nmemory. We propose FlashAttention, an IO-aware exact attention algorithm that\nuses tiling to reduce the number of memory reads/writes between GPU high\nbandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of\nFlashAttention, showing that it requires fewer HBM accesses than standard\nattention, and is optimal for a range of SRAM sizes. We also extend\nFlashAttention to block-sparse attention, yielding an approximate attention\nalgorithm that is faster than any existing approximate attention method.\nFlashAttention trains Transformers faster than existing baselines: 15%\nend-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the\nMLPerf 1.1 training speed record, 3$\\times$ speedup on GPT-2 (seq. length 1K),\nand 2.4$\\times$ speedup on long-range arena (seq. length 1K-4K). FlashAttention\nand block-sparse FlashAttention enable longer context in Transformers, yielding\nhigher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on\nlong-document classification) and entirely new capabilities: the first\nTransformers to achieve better-than-chance performance on the Path-X challenge\n(seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1%\naccuracy).", + "authors": "Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher R\u00e9", + "published": "2022-05-27", + "updated": "2022-06-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.17192v2", + "title": "Fast Inference from Transformers via Speculative Decoding", + "abstract": "Inference from large autoregressive models like Transformers is slow -\ndecoding K tokens takes K serial runs of the model. In this work we introduce\nspeculative decoding - an algorithm to sample from autoregressive models faster\nwithout any changes to the outputs, by computing several tokens in parallel. 
At\nthe heart of our approach lie the observations that (1) hard language-modeling\ntasks often include easier subtasks that can be approximated well by more\nefficient models, and (2) using speculative execution and a novel sampling\nmethod, we can make exact decoding from the large models faster, by running\nthem in parallel on the outputs of the approximation models, potentially\ngenerating several tokens concurrently, and without changing the distribution.\nOur method can accelerate existing off-the-shelf models without retraining or\narchitecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration\ncompared to the standard T5X implementation, with identical outputs.", + "authors": "Yaniv Leviathan, Matan Kalman, Yossi Matias", + "published": "2022-11-30", + "updated": "2023-05-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.15566v1", + "title": "SpotServe: Serving Generative Large Language Models on Preemptible Instances", + "abstract": "The high computational and memory requirements of generative large language\nmodels (LLMs) make it challenging to serve them cheaply. This paper aims to\nreduce the monetary cost for serving LLMs by leveraging preemptible GPU\ninstances on modern clouds, which offer accesses to spare GPUs at a much\ncheaper price than regular instances but may be preempted by the cloud at any\ntime. Serving LLMs on preemptible instances requires addressing challenges\ninduced by frequent instance preemptions and the necessity of migrating\ninstances to handle these preemptions.\n This paper presents SpotServe, the first distributed LLM serving system on\npreemptible instances. Several key techniques in SpotServe realize fast and\nreliable serving of generative LLMs on cheap preemptible instances. First,\nSpotServe dynamically adapts the LLM parallelization configuration for dynamic\ninstance availability and fluctuating workload, while balancing the trade-off\namong the overall throughput, inference latency and monetary costs. Second, to\nminimize the cost of migrating instances for dynamic reparallelization, the\ntask of migrating instances is formulated as a bipartite graph matching\nproblem, which uses the Kuhn-Munkres algorithm to identify an optimal migration\nplan that minimizes communications. Finally, to take advantage of the grace\nperiod offered by modern clouds, we introduce stateful inference recovery, a\nnew inference mechanism that commits inference progress at a much finer\ngranularity and allows SpotServe to cheaply resume inference upon preemption.\nWe evaluate on real spot instance preemption traces and various popular LLMs\nand show that SpotServe can reduce the P99 tail latency by 2.4 - 9.1x compared\nwith the best existing LLM serving systems. We also show that SpotServe can\nleverage the price advantage of preemptive instances, saving 54% monetary cost\ncompared with only using on-demand instances.", + "authors": "Xupeng Miao, Chunan Shi, Jiangfei Duan, Xiaoli Xi, Dahua Lin, Bin Cui, Zhihao Jia", + "published": "2023-11-27", + "updated": "2023-11-27", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.05920v1", + "title": "Fast Distributed Inference Serving for Large Language Models", + "abstract": "Large language models (LLMs) power a new generation of interactive AI\napplications exemplified by ChatGPT. 
The interactive nature of these\napplications demand low job completion time (JCT) for model inference. Existing\nLLM serving systems use run-to-completion processing for inference jobs, which\nsuffers from head-of-line blocking and long JCT. We present FastServe, a\ndistributed inference serving system for LLMs. FastServe exploits the\nautoregressive pattern of LLM inference to enable preemption at the granularity\nof each output token. FastServe uses preemptive scheduling to minimize JCT with\na novel skip-join Multi-Level Feedback Queue scheduler. Based on the new semi\ninformation-agnostic setting of LLM inference, the scheduler leverages the\ninput length information to assign an appropriate initial queue for each\narrival job to join. The higher priority queues than the joined queue are\nskipped to reduce demotions. We design an efficient GPU memory management\nmechanism that proactively offloads and uploads intermediate states between GPU\nmemory and host memory for LLM inference. We build a system prototype of\nFastServe based on NVIDIA FasterTransformer. Experimental results show that\ncompared to the state-of-the-art solution Orca, FastServe improves the average\nand tail JCT by up to 5.1$\\times$ and 6.4$\\times$, respectively.", + "authors": "Bingyang Wu, Yinmin Zhong, Zili Zhang, Gang Huang, Xuanzhe Liu, Xin Jin", + "published": "2023-05-10", + "updated": "2023-05-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.07104v1", + "title": "Efficiently Programming Large Language Models using SGLang", + "abstract": "Large language models (LLMs) are increasingly used for complex tasks\nrequiring multiple chained generation calls, advanced prompting techniques,\ncontrol flow, and interaction with external environments. However, efficient\nsystems for programming and executing these applications are lacking. To bridge\nthis gap, we introduce SGLang, a Structured Generation Language for LLMs.\nSGLang is designed for the efficient programming of LLMs and incorporates\nprimitives for common LLM programming patterns. We have implemented SGLang as a\ndomain-specific language embedded in Python, and we developed an interpreter, a\ncompiler, and a high-performance runtime for SGLang. These components work\ntogether to enable optimizations such as parallelism, batching, caching,\nsharing, and other compilation techniques. Additionally, we propose\nRadixAttention, a novel technique that maintains a Least Recently Used (LRU)\ncache of the Key-Value (KV) cache for all requests in a radix tree, enabling\nautomatic KV cache reuse across multiple generation calls at runtime. SGLang\nsimplifies the writing of LLM programs and boosts execution efficiency. Our\nexperiments demonstrate that SGLang can speed up common LLM tasks by up to 5x,\nwhile reducing code complexity and enhancing control.", + "authors": "Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. 
Gonzalez, Clark Barrett, Ying Sheng", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.PL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.17323v2", + "title": "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", + "abstract": "Generative Pre-trained Transformer models, known as GPT or OPT, set\nthemselves apart through breakthrough performance across complex language\nmodelling tasks, but also by their extremely high computational and storage\ncosts. Specifically, due to their massive size, even inference for large,\nhighly-accurate GPT models may require multiple performant GPUs, which limits\nthe usability of such models. While there is emerging work on relieving this\npressure via model compression, the applicability and performance of existing\ncompression techniques is limited by the scale and complexity of GPT models. In\nthis paper, we address this challenge, and propose GPTQ, a new one-shot weight\nquantization method based on approximate second-order information, that is both\nhighly-accurate and highly-efficient. Specifically, GPTQ can quantize GPT\nmodels with 175 billion parameters in approximately four GPU hours, reducing\nthe bitwidth down to 3 or 4 bits per weight, with negligible accuracy\ndegradation relative to the uncompressed baseline. Our method more than doubles\nthe compression gains relative to previously-proposed one-shot quantization\nmethods, preserving accuracy, allowing us for the first time to execute an 175\nbillion-parameter model inside a single GPU for generative inference. Moreover,\nwe also show that our method can still provide reasonable accuracy in the\nextreme quantization regime, in which weights are quantized to 2-bit or even\nternary quantization levels. We show experimentally that these improvements can\nbe leveraged for end-to-end inference speedups over FP16, of around 3.25x when\nusing high-end GPUs (NVIDIA A100) and 4.5x when using more cost-effective ones\n(NVIDIA A6000). The implementation is available at\nhttps://github.com/IST-DASLab/gptq.", + "authors": "Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh", + "published": "2022-10-31", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.11514v2", + "title": "HexGen: Generative Inference of Large-Scale Foundation Model over Heterogeneous Decentralized Environment", + "abstract": "Serving generative inference of the large-scale foundation model is a crucial\ncomponent of contemporary AI applications. This paper focuses on deploying such\nservices in a heterogeneous and decentralized setting to mitigate the\nsubstantial inference costs typically associated with centralized data centers.\nTowards this end, we propose HexGen, a flexible distributed inference engine\nthat uniquely supports the asymmetric partition of generative inference\ncomputations over both tensor model parallelism and pipeline parallelism and\nallows for effective deployment across diverse GPUs interconnected by a fully\nheterogeneous network. We further propose a sophisticated scheduling algorithm\ngrounded in constrained optimization that can adaptively assign asymmetric\ninference computation across the GPUs to fulfill inference requests while\nmaintaining acceptable latency levels. 
We conduct an extensive evaluation to\nverify the efficiency of HexGen by serving the state-of-the-art Llama-2 (70B)\nmodel. The results suggest that HexGen can choose to achieve up to 2.3 times\nlower latency deadlines or tolerate up to 4 times more request rates compared\nwith the homogeneous baseline given the same budget.", + "authors": "Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan", + "published": "2023-11-20", + "updated": "2024-02-04", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.06180v1", + "title": "Efficient Memory Management for Large Language Model Serving with PagedAttention", + "abstract": "High throughput serving of large language models (LLMs) requires batching\nsufficiently many requests at a time. However, existing systems struggle\nbecause the key-value cache (KV cache) memory for each request is huge and\ngrows and shrinks dynamically. When managed inefficiently, this memory can be\nsignificantly wasted by fragmentation and redundant duplication, limiting the\nbatch size. To address this problem, we propose PagedAttention, an attention\nalgorithm inspired by the classical virtual memory and paging techniques in\noperating systems. On top of it, we build vLLM, an LLM serving system that\nachieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV\ncache within and across requests to further reduce memory usage. Our\nevaluations show that vLLM improves the throughput of popular LLMs by\n2-4$\\times$ with the same level of latency compared to the state-of-the-art\nsystems, such as FasterTransformer and Orca. The improvement is more pronounced\nwith longer sequences, larger models, and more complex decoding algorithms.\nvLLM's source code is publicly available at\nhttps://github.com/vllm-project/vllm", + "authors": "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica", + "published": "2023-09-12", + "updated": "2023-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. 
Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? 
An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. 
Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identified\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. 
In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. 
Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. 
We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. 
While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. 
This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. 
This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. 
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. 
To make a thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. 
Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. 
Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. 
To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. 
To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. 
This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. 
Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. 
In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. 
To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of an objective and fair benchmark that encompasses most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. 
ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. 
Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. 
Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ) which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on the user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat leads to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. 
Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. 
However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. 
This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering their performance in the $\\textit{Binary Statutory\nReasoning}$ task and their fairness exhibition with respect to various axes of\ndisparities in Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. 
This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. 
By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread adoption of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to their stronger language generation and instruction-following capabilities,\nas well as their wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborately designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunsatisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research on harmless LLMs.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of the bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. 
In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. 
These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. 
We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. 
Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.14464v1", + "title": "Tree of Reviews: A Tree-based Dynamic Iterative Retrieval Framework for Multi-hop Question Answering", + "abstract": "Multi-hop question answering is a knowledge-intensive complex problem. Large\nLanguage Models (LLMs) use their Chain of Thoughts (CoT) capability to reason\ncomplex problems step by step, and retrieval-augmentation can effectively\nalleviate factual errors caused by outdated and unknown knowledge in LLMs.\nRecent works have introduced retrieval-augmentation in the CoT reasoning to\nsolve multi-hop question answering. However, these chain methods have the\nfollowing problems: 1) Retrieved irrelevant paragraphs may mislead the\nreasoning; 2) An error in the chain structure may lead to a cascade of errors.\n In this paper, we propose a dynamic retrieval framework called Tree of\nReviews (ToR), where the root node is the question, and the other nodes are\nparagraphs from retrieval, extending different reasoning paths from the root\nnode to other nodes. Our framework dynamically decides to initiate a new\nsearch, reject, or accept based on the paragraphs on the reasoning paths.\nCompared to related work, we introduce a tree structure to handle each\nretrieved paragraph separately, alleviating the misleading effect of irrelevant\nparagraphs on the reasoning path; the diversity of reasoning path extension\nreduces the impact of a single reasoning error on the whole. We conducted\nexperiments on three different multi-hop question answering datasets. 
The\nresults show that compared to the baseline methods, ToR achieves\nstate-of-the-art performance in both retrieval and response generation. In\naddition, we propose two tree-based search optimization strategies, pruning and\neffective expansion, to reduce time overhead and increase the diversity of path\nextension. We will release our code.", + "authors": "Li Jiapeng, Liu Runze, Li Yabo, Zhou Tong, Li Mingling, Chen Xiang", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Supervised Multi-hop Question Answering Some researchers have investigated iterative retrieval for multi-hop question answering in fully supervised settings. Das et al. (2019) generate a new query representation by utilizing the current query and the current state of the reader, and initiate iterative retrieval. Feldman and El-Yaniv (2019) adopt a similar approach, in which a fusion module is designed in the new query generation stage to ensure sufficient interaction. Qi et al. (2019) employ a supervised generator to generate new queries based on the query and historical passages, and iteratively conduct retrieval. Nakano et al. (2021) utilize GPT3 to answer long-form questions by simulating human browsing behavior. Although supervised models perform well on multi-hop datasets 2 like HotpotQA, they rely on expensive manual annotation and specific training. However, in practical application scenarios like New Bing and PerPlexity.AI, the indexed document scope is broader, and the retrieval source is updated in real-time. In this case, the supervised models are likely to fail. Retrieval-Augmentation for Complex Problems The Retrieval-Augmented Generation (RAG) system typically retrieves additional knowledge from specific corpora, such as Wikipedia, to alleviate the hallucination problem of Large Language Models (LLMs), thereby significantly enhancing the performance of LLMs in various tasks (Lewis et al., 2020; Guu et al., 2020; Ram et al., 2023). Early research on RAG typically employs a onestep retrieval approach, which is ineffective in addressing composite problems. To tackle composite problems, Self-Ask (Press et al., 2023) poses sub-questions before answering the main question, optimizing complex composite problems through multiple retrievals.IRCoT (Trivedi et al., 2023) triggers retrieval on each sentence of the CoT. ITERRETGEN (Shao et al., 2023) connects the complete CoT reasoning steps generated in the previous turn with the original question for the next turn\u2019s generation query. However, these methods all adopt a chain-like structure for reasoning. If an error occurs at any step in the reasoning path, it could potentially cause the reasoning path to deviate. Tree-like Reasoning for Complex Problems The tree is an efficient structure for solving complex reasoning problems (Yao et al., 2023a). Tree of Thought(ToT) enhances the problem-solving capabilities of Large Language Models (LLMs) by introducing a tree-like structure during the reasoning process, simulating the human problem-solving process. This allows the model to consider multiple reasoning paths and self-evaluate to decide the following action. Asai et al. (2020) trained a retriever that dynamically retrieves information from Wikipedia graphs. However, this method relied on a hyperlink graph constructed from Wikipedia, which fails when the path related to the problem is not included. 
Some researchers decompose complex problems into a static problem tree with several sub-problems. Then, answer each sub-problem by utilizing language models and additional retrieval information (Cao et al., 2023) or calculating the probability of reasoning paths (Zhang et al., 2023), ultimately solving the complex problem. However, the decomposition of the question and the construction of the tree lack the assistance of external knowledge and information on the reasoning path, which can easily lead to incorrect decomposition, possibly affecting the correctness of the final answer. In contrast, our work is the first to propose a retrieval framework that uses a tree-like structure to dynamically initiate requests based on external knowledge and information on the reasoning path. LLMs can decide dynamically whether to initiate further retrieval and what requests to generate based on this information. We have designed two search optimization strategies to reduce the time overhead of tree structure searching and enhance the diversity of initiating requests: pruning and effective expansion.", + "pre_questions": [], + "main_content": "Introduction Large Language Models (LLMs) have demonstrated the capacity for multi-step reasoning (Wei et al., 2022). This is achieved by generating intermediate reasoning steps, a process known as Figure 1: The chain-like iterative retrieval process faces the issue of error accumulation. The example shows how irrelevant retrieval results affect subsequent retrievals and the generation of new queries, ultimately leading to incorrect answers. the chain of thoughts (CoT) (Kojima et al., 2022). However, despite their advanced reasoning capabilities, LLMs are sometimes prone to generating incorrect reasoning steps. These inaccuracies can be attributed to the lack of current knowledge within their parameters or the erroneous retrieval of information encoded in their weights (Maynez et al., 2020). In response to this issue, arguments for LLMs with knowledge from external data sources have emerged as a promising approach, attracting increased attention from researchers (Shi et al., 2023; Jiang et al., 2023; Trivedi et al., 2023). In some typical question-answering tasks, retrieval-augmented language models utilize a onetime retrieval method (Izacard et al., 2023; Lewis et al., 2020). However, these methods are not satisfied for multi-hop questions, necessitating a more nuanced approach to acquiring comprehensive knowledge. Such questions often involve indirect facts that may exhibit minimal lexical or semantic correlation with the question but are es1 arXiv:2404.14464v1 [cs.CL] 22 Apr 2024 sential for reaching accurate answers. For instance, to answer the question,\u2018According to the 2001 census, what was the population of the city in which Kirton End is located?\u2019. First, we need to know that \u2019Kirton End is located in Boston\u2019, then look up \u2019the population of Boston according to the 2001 census\u2019. This process highlights the necessity of iterative retrieval, underscoring the limitations of one-time retrieval strategies in addressing complex informational needs. Iterative retrieval involves conducting multiple turns of retrieval, each guided by newly generated sub-questions (Press et al., 2023), the most recent response (Shao et al., 2023), or an intermediate reasoning step (Trivedi et al., 2023). As illustrated in Figure 1, these methods employ a sequential, chain-like process alternating between retrieval and query generation. 
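The chain-style iterative retrieval sketched above can be summarized in a few lines. The snippet below is illustrative only and is not the exact procedure of Self-Ask, ITER-RETGEN, or IRCoT; the helper names `retrieve`, `generate_next_query`, and `answer` stand in for a dense retriever and an LLM and are assumptions made for clarity. It highlights the structural weakness discussed next: each new query is derived solely from the current chain state, so one bad hop propagates forward.

```python
# Minimal sketch of chain-style iterative retrieval (illustrative; helper
# callables and their signatures are assumptions, not any paper's API).
def chain_iterative_qa(question, retrieve, generate_next_query, answer,
                       max_turns=3, top_k=5):
    query = question
    collected = []                      # paragraphs gathered so far
    for _ in range(max_turns):
        paragraphs = retrieve(query, top_k=top_k)
        collected.extend(paragraphs)
        # The next query depends only on the chain so far, so an irrelevant
        # paragraph or a wrong intermediate step steers all later retrievals.
        query = generate_next_query(question, collected)
        if query is None:               # the LLM signals it has enough evidence
            break
    return answer(question, collected)
```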
While these approaches demonstrate superior performance compared to one-time retrieval, the chain-like nature of the process is susceptible to cascading errors. A local error at any step, whether due to inappropriate retrieval or query generation, can affect subsequent steps, culminating in incorrect responses. Such errors underscore the inherent vulnerability of iterative retrieval methods, highlighting a critical challenge in achieving reliable knowledge extraction. To this end, drawing inspiration from previous work that leveraged a tree-like reasoning process to enhance reasoning capability (Yao et al., 2023a), this paper introduces TREE OF REVIEWS(TOR) a dynamic, tree-based iterative retrieval framework. We follow the retrieve-and-read paradigm (Zhu et al., 2021), the retriever initially retrieves knowledge for the question, and the reader utilizes this retrieved knowledge to generate a response. In detail, we construct a tree with the initial question serving as the root and individual paragraphs as other nodes during the retrieval phase. Each node contains a single paragraph, mitigating the risk of diverging the reasoning process due to irrelevant information. A paragraph review block within this structure evaluates each node to determine the subsequent action\u2014further retrieval, acceptance, or rejection. Each accepted path is referred to as a piece of evidence. We propose three evidence fusion methods, allowing the reader to utilize evidence from various paths to generate the final response. Incorporating a tree structure into the retrieval process has enhanced the performance of paragraph retrieval and answer accuracy. To further enhance the search efficiency of ToR, we advocate for two tree-based search optimization strategies: pruning, which aims to diminish the frequency of unproductive search initiations, and effective expansion, designed to refine query generation for improved retrieval paragraphs. Experiments on three different multi-hop question answering datasets show that our proposed method achieves state-of-the-art performance in both retrieval and response generation. Our contributions include: \u2022 We propose a dynamic retrieval framework named TREE OF REVIEWS (TOR), which integrates a tree structure into the iterative retrieval process. This method mitigates the negative impact associated with the inherent vulnerabilities of chain-like retrieval methods. \u2022 We propose two tree-based search optimization strategies: pruning and effective expansion. These strategies demonstrate significant improvements in both retrieval quality and efficiency. These efforts offer valuable insights for the optimization of iterative retrieval methods. \u2022 Our method achieves state-of-the-art performance in both retrieval and response generation on three different multi-hop question answering datasets. Extensive experiments have conclusively demonstrated the effectiveness of our method. 3.1 Overall The task is to answer a multi-hop question Q based on a retrieval corpus D. As illustrated in Figure 2, we introduce TREE OF REVIEWS(TOR), a treebased dynamic retrieval framework. In this framework, the root node is the question Q, while each subsequent node is a paragraph with a paragraphs review block (Sec.3.2). These blocks dynamically judge whether to stop or continue the search based on all paragraphs along the path from the root node to the current node. If there are enough paragraphs to answer Q, the model will use them to produce evidence and add it to the evidence pool. 
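To make the structure described in Section 3.1 concrete, the following minimal sketch defines the kind of objects the framework manipulates: a tree node that holds one retrieved paragraph and a pointer to its parent, so the path back to the root question can be recovered, and an evidence record pairing an accepted path with its brief analysis. The class and field names are our own illustration under these assumptions, not the authors' released code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative data structures for the Tree of Reviews framework (Section 3.1).
# Names are assumptions for this sketch, not the authors' implementation.

@dataclass
class Node:
    paragraph: Optional[str]            # None for the root, which holds only the question
    parent: Optional["Node"] = None

    def path_paragraphs(self) -> List[str]:
        """All paragraphs on the path from the root question down to this node."""
        node, paragraphs = self, []
        while node is not None and node.paragraph is not None:
            paragraphs.append(node.paragraph)
            node = node.parent
        return list(reversed(paragraphs))

@dataclass
class Evidence:
    paragraphs: List[str]               # the accepted reasoning path
    brief_analysis: str                 # the LLM's answer based on that path

@dataclass
class EvidencePool:
    items: List[Evidence] = field(default_factory=list)

    def add(self, evidence: Evidence) -> None:
        self.items.append(evidence)
```

Keeping only a parent pointer per node is enough here, because the review block always reasons over the single path from the root to the current node rather than over the whole tree.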
Upon reviewing all paths, the reader generates the final response to Q based on the evidence in the evidence pool. We propose three evidence fusion strategies (Sec.3.3) to fully use information from diverse reasoning paths. Additionally, we propose two treebased search optimization methods (Sec.3.4): pruning and effective expansion. These methods aim to enhance the search efficiency of the TOR framework. 3.2 Paragraphs Review The TOR framework initiates multiple retrievals, utilizing both the original question Q and new queries generated by the paragraphs review block. We use a dense retriever proposed by Izacard et al. (2023) to encode both the query and the retrieval corpus D and then compute the similarity score of their embeddings by cosine similarity. We expand a child node for each retrieved 3 Figure 2: The left side illustrates the overall TOR framework (introduced in Section 3.1). The upper right half illustrates Paragraphs Review a fundamental component of TOR (introduced in Section 3.2). The lower right half illustrates Evidence Fusion a method for more effectively utilizing retrieved information for reading (introduced in Section 3.3). paragraph individually. The paragraphs review block selects an action based on the question Q and paragraphs along the path. The block is designed to execute the following steps: (i) Judging whether the paragraphs are relevant to the question Q. Relevance means that paragraphs contain partial information to answer the question. (ii) Judging whether the paragraphs have enough information, which means that paragraphs contain all information to answer the question (iii) If it is not relevant, then action selection is [Reject] then stop search; if it is relevant but not enough, action selection is [Search] then generate a new query and retrieval with it; if it is relevant and enough, then action selection is [Accept] then stop search. Once LLM accepts a reasoning path, it will be asked to answer the original question Q based on all the documents on this reasoning path. This answer is called a brief analysis. The accepted reasoning path combined with the brief analysis form a piece of evidence. Utilizing the method above, we iteratively execute retrieval and review through the depth-first search until each reasoning path is either accepted, rejected, or reaches its maximum search depth. 3.3 Evidence Fusion The evidence pool contains some pieces of evidence. The QA reader will generate a response according to the evidence pool. We propose three simple methods for evidence fusion: Analysis-based Fusion: The reader generates a response only according to the brief analysis. Paragraph-Based Fusion: The reader generates a response only according to the paragraphs. Evidence-based fusion: The reader generates a response according to both of them. See Table 10 to 12 for details. 3.4 Tree-Based Search Optimization Although the tree structure can explore more diverse reasoning paths and reduce failures caused by a single reasoning path, it introduces longer time overheads. Therefore, we propose pruning and effective expansion to reduce redundancy and irrelevant expansion in the search process while guaranteeing expansion diversity. Pruning aims to reduce the initiation of invalid searches. We propose two methods: Relevance Pruning and Repetitive Pruning. 
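Putting the review steps of Section 3.2 together, the depth-first traversal might look like the sketch below. It is a simplified reading of the procedure with hypothetical helpers: `retrieve(query, top_k)` returns paragraphs ranked by embedding cosine similarity, and `review(question, path)` asks the LLM to judge relevance and sufficiency, returning one of the three actions together with a new query (for [Search]) or a brief analysis (for [Accept]). The default branching factors follow the paper's reported configuration of depth 3 with widths 5, 3, 3.

```python
# Sketch of the review-then-act depth-first search (Section 3.2). The helper
# callables are assumptions: `retrieve` wraps a dense retriever scored by
# cosine similarity; `review` wraps the paragraphs-review prompt and returns
# ("reject", None), ("search", new_query), or ("accept", brief_analysis).
def tree_of_reviews(question, retrieve, review, widths=(5, 3, 3)):
    evidence_pool = []                          # (path_paragraphs, brief_analysis) pairs
    max_depth = len(widths)

    def expand(path, query, depth):
        if depth >= max_depth:                  # reached the maximum search depth
            return
        for paragraph in retrieve(query, top_k=widths[depth]):
            new_path = path + [paragraph]
            action, payload = review(question, new_path)
            if action == "reject":              # irrelevant: stop this branch
                continue
            if action == "accept":              # relevant and sufficient
                evidence_pool.append((new_path, payload))
            elif action == "search":            # relevant but not yet enough
                expand(new_path, payload, depth + 1)

    expand([], question, 0)
    return evidence_pool
```

In this reading, relevance pruning corresponds to the early `reject` branch, and repetitive pruning, described next, would additionally skip any retrieved paragraph whose ID already appears in the evidence pool.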
Relevance pruning is conducted at the paragraphs review block, where the model judges whether the paragraphs are relevant to the question and subse4 Method GPT-3.5-Turbo GPT-4-Turbo HotpotQA 2WikiMQA MuSiQue HotpotQA 2WikiMQA MuSiQue OneR 44.3 45.8 23.2 44.3 45.8 23.2 ReAct 44.6 48.0 25.2 51.3 46.1 35.5 Self-AsK 44.0 50.7 25.9 52.9 59.5 37.2 ITER-RETGEN 50.6 51.1 27.2 60.5 67.4 47.0 IRCoT 46.0 46.5 25.2 53.3 53.9 36.5 CoR 47.9 47.6 25.8 61.0 62.4 39.4 ToR 53.1 51.8 29.5 73.8 79.4 48.5 Table 1: Paragraphs recall@15 on multi-hop question answering datasets. We highlight the best results in bold and underline the best results among other methods. quently prunes the paths expanded from irrelevant paragraphs. Repetitive pruning is conducted after retrieval, where it matches the paragraph ID of retrieved paragraphs and received paragraphs in the evidence pool. If any retrieved paragraph is already in the evidence pool, it is pruned. Effective expansion aims to optimize the effectiveness and diversity of paragraphs review block initiating new queries. We adopt CoT Expansion and Missing Paragraph Completion Expansion (MPC). CoT expansion allows the model to think step by step, identify missing information in current paragraphs and generate a new query based on this missing information. MPC expansion enables the model to complete the missing information in paragraphs using its internal knowledge and to use the newly generated paragraph as a new query. See Table 7 and Table 9 for details. 4 Experiments 4.1 Datasets We conducted experiments on three multi-hop reasoning datasets: HotpotQA with Fullwiki setting (Yang et al., 2018), 2Wiki-MultiHopQA (Ho et al., 2020), and the answerable subset of MuSiQue (Trivedi et al., 2022). For HotpotQA and 2WikiMultiHopQA, we used the Wikipedia dump from December 2018 as the retrieval source, while for MuSiQue, we used the Wikipedia dump from December 2021. Following the work of previous researchers (Shao et al., 2023), we used the first 500 questions from the development sets of these datasets for retrieval and response generation performance evaluation in Table 1 and Table 2. Then, randomly selected 100 questions from the remaining part for hyperparameter tuning in Table 3 to Table 6. 4.2 Evaluate Setting We evaluated TOR from retrieval quality and generation quality. For the retrieval metric, we followed Trivedi et al. (2023), allowing different retrieval systems to return up to 15 paragraphs and calculating the recall of golden paragraphs; this is referred to as recall@151. We used exact match (EM) and F1 score for the generation metric. 4.3 Baselines Given the disparity in retriever, reader, and test samples used by the baseline methods, a fair comparison becomes challenging. Therefore, we followed Shao et al. (2023) used Contriver (Izacard et al., 2023) as our retriever. GPT-3.5-Turbo(gpt3.5-turbo-0125) and GPT-4-Turbo(gpt-4-1106preview) (Ouyang et al., 2022; OpenAI, 2023) were used as the base models to implement the following baseline methods. The format of prompts and few-shot settings are adopted as presented in their papers. We retrieved top-5 paragraphs for each query, and for baselines involving multi-turn iterations, we set the maximum number of turns to 3. Direct Prompting (Brown et al., 2020) prompts a Language Language Model (LLM) to generate the final answer directly. CoT prompting (Wei et al., 2022)prompts a Language Language Model (LLM) to generate the final answer step by step. 
One-step Retrieval with Direct/ CoT prompting (OneR-Direct/ CoT) augments Direct/CoT 1In this work if the number of retrieved paragraphs exceeds 15, we re-rank the evidence in the evidence pool based on the similarity between the evidence and the final response. We select the top 15 paragraphs according to their similarity scores. 5 Method GPT-3.5-Turbo GPT-4-Turbo HotpotQA 2WikiMQA MuSiQue HotpotQA 2WikiMQA MuSiQue EM F1 EM F1 EM F1 EM F1 EM F1 EM F1 Without Retrieval Direct 28.2 37.7 27.6 31.8 9.6 18.2 40.6 52.2 38.4 47.6 25.9 36.0 CoT 27.8 38.8 26.7 33.6 17.2 25.1 39.6 53.3 42.2 51.8 24.0 36.8 With Retrieval OneR-Direct 25.0 33.4 20.6 23.8 5.0 10.3 40.2 53.6 32.6 42.4 26.2 37.4 OneR-CoT 24.8 32.1 14.0 18.7 5.0 11.2 39.6 52.3 36.6 47.0 22.8 34.9 ReAct 25.8 37.2 14.6 24.3 2.2 7.7 34.8 47.5 37.7 48.2 26.4 38.0 Self-AsK 23.4 32.6 15.4 22.3 5.6 12.5 39.1 51.3 41.1 52.6 28.8 40.6 ITER-RETGEN 25.8 36.7 16.6 23.0 9.4 16.7 46.2 58.9 39.8 51.0 31.4 43.3 IRCoT 29.8 38.9 26.4 30.4 8.2 15.0 42.8 53.9 39.2 49.3 28.6 40.1 CoR 30.6 39.8 25.2 28.8 9.2 16.8 41.7 55.3 43.6 54.1 26.6 38.4 ToR 38.2 50.4 29.0 37.0 13.2 22.1 49.2 63.1 51.0 62.9 30.9 43.6 Table 2: Answer EM and F1 results on multi-hop question answering datasets. We highlight the best results in bold and underline the best results among other methods. Prompting with paragraphs retrieved by the retriever. ReAct/Self-Ask (Yao et al., 2023b; Press et al., 2023) iteratively execute the following steps : (i) Initiate retrieval using the follow-up question generated by the LLM, returning relevant paragraphs, and (ii) Respond to the follow-up question, subsequently deciding whether to generate the next question or finalize the answer. The primary distinction between ReAct and Self-Ask in our implementation lies in the positioning of the retrieved paragraphs. ITER-RETGEN (Shao et al., 2023) iteratively execute the following steps for several turns:(i) Initiate retrieval using the original question and response generated by the LLM, returning relevant paragraphs, and (ii) Answer the original question with the current turn retrieval paragraphs. Finally, take the last round\u2019s response as an answer. IRCoT (Trivedi et al., 2023) iteratively execute the following steps: (i) Initiate retrieval using the CoT sentence generated by the LLM, returning relevant paragraphs, and (ii) Generate a new CoT sentence using historical information until a special trigger word is produced or the maximum number of turns is reached. Finally, use the historical retrieval paragraphs to answer the original question. 4.4 Implementation Details We also employed Contriver, GPT-3.5-Turbo, and GPT-4-Turbo for our experiments. We adopted a greedy decoding strategy to ensure the stability of the output. We set the maximum length to 4096 and added as much evidence from the pool without exceeding this limit. We randomly sampled several data from each dataset\u2019s training set, manually annotated them for few-shot demonstrations, and adopted a 3-shot setting for all baselines and ours. In the main experiment, the depth of TOR is set to 3, with the number of nodes in each layer being 5, 3, 3. We adopted the evidence-based fusion method, missing paragraph completion expansion strategy, and two kinds of pruning strategies. CoR: To compare the differences between the tree and chain structures, we designed an experiment using the same prompt as TOR but providing only a single reasoning path. 
The model chooses an action in each iteration and generates new queries based on all retrieved paragraphs. See Table 13 for details. 4.5 Main Result As shown by Table 1 and Table 2, our method TOR achieves nearly optimal performance in both retrieval metrics and generation metrics across three datasets under two different base models. In the experiments with GPT-4-Turbo as the base model 6 for the three datasets, the retrieval metrics outperform the best-performing baseline method ITERRETGEN by 13.3%, 12.0%, and 1.5%, respectively. Meanwhile, the F1 values for response generation surpass the highest values among the various baseline methods, with improvements of 9.2%, 10.3%, and 0.3%, respectively. We consider three reasons for achieving these results: 1) TOR allows the model to explore multiple reasoning paths, effectively mitigating the cascading errors caused by single reasoning path mistakes. IRCoT, ITER-RETGEN, and CoR (introduced in Section 4.4) are all based on chain-ofthought reasoning, and their final response quality is constrained by the accuracy of retrieval and reasoning at each step along the path. In contrast, TOR employs a tree structure to expand into different paths, sharing the risk of retrieval and reasoning failures. 2) The TOR structure can effectively reduce the interference of useless information. IRCoT, ITER-RETGEN, and CoR utilize all retrieved paragraphs during the reasoning process, and the useless information contained therein may lead to reasoning errors. We reduce the impact of useless information on retrieval and reasoning along the path by two pruning strategies. 3) TOR enhances the generation quality by improving the quality of retrieval results. Combining the results from Table 1 and Table 2, we find that retrieval metrics are positively correlated with generation metrics. Therefore, our method improves the final generation quality by enhancing the system\u2019s retrieval performance. Although we adopted the same prompts and experimental settings as in the baseline papers, the results of some baselines on GPT-3.5-Turbo still do not perform well. We speculate that the main reason for this performance gap is the scale of the model parameter. According to the API call prices, GPT-3.5-Turbo costs $0.5/1M tokens for input and $1.5/1M tokens for output, and text-davinci-003 costs $20.0/1M tokens. Based on this, we can infer that the parameter scale of gpt-3.5-turbo is much smaller than that of text-davinci-003. The evidence fusion strategies can enhance the performance of the reader. As shown by Table 3, generating the final answer based on both retrieved paragraphs and analysis yields optimal performance, demonstrating the effectiveness of our search process. A significant gap exists between performance derived from analysis and those from paragraphs, indicating that when there are Method HotpotQA 2WikiMQA MuSiQue Analysis 56.8 52.0 40.4 Paragraph 64.6 62.3 46.0 Evidence 65.5 63.7 46.2 Table 3: Answer F1 with different evidence fusion strategies. conflicts between different pieces of evidence, the model needs to incorporate information from the retrieved paragraphs to better resolve the contradictions, while the information that analysis can provide is limited. Method HotpotQA 2WikiMQA MuSiQue Recall F1 Recall F1 Recall F1 Direct 61.5 57.8 58.8 53.6 42.7 39.0 CoT 66.2 60.4 62.8 56.9 43.0 41.3 MPC 74.6 65.5 79.3 63.7 49.9 46.2 Table 4: Paragraphs Recall@15 and Answer F1 with different effective expansion strategies. 
Direct represents the approach of not using effective expansion and generating a new query directly. CoT represents the approach of using CoT Expansion. MPC represents the approach of using Missing Paragraph Completion Expansion. Effective expansion strategies significantly enhance the performance of retrieval. As shown by Table 4, our proposed strategies surpass the baseline strategy, demonstrating their superiority in guiding the search direction by controlling the queries used for retrieval. The performance improvement observed with the CoT underscores the significance of incorporating reasoning capabilities into iterative retrieval processes. Notably, the MPC strategy exhibits the best performance, which may be attributed to the extensive knowledge stored in recent LLMs. This confirms that utilizing both parametric and non-parametric information during the retrieval process can improve retrieval and generation performance (Yu et al., 2023; Sun et al., 2023). The pruning strategies ensure performance while reducing time cost. As shown by Table 5, repetitive pruning improves the effective call rate, significantly reducing the time of API calls for the same paragraph and lowering the time cost. Without repetitive pruning, the framework can retrieve more different paragraphs and obtain more evidence through node expansion, which leads to 7 Method #API #Doc Rate #Evidence Recall@15 EM F1 ToR 16.9 15.7 92.9 2.9 79.3 51.6 63.7 w/o repetitive pruning 33.5 18.3 54.6 3.7 76.4 51.4 63.8 w/o relevance pruning 29.1 24.6 84.5 3.3 73.2 48.9 59.3 w/o both 65.0 31.8 48.9 4.4 72.9 49.1 59.4 Table 5: Results of different pruning strategies on 2WikiMQA, #API represents the average number of GPT API calls. #Doc represents the average number of different paragraphs retrieved. Rate = #Doc/#API, which means the number of reviewed paragraphs per API call, where a higher value indicates more effective API calls (the higher, the better).#Evidence represents the average number of evidence in the evidence pool. Other metrics are introduced in Section 4.2. TOR use both repetitive pruning and relevance pruning. w/o repetitive pruning only uses relevance pruning, and w/o relevance pruning only uses repetitive pruning. w/o, both don\u2019t use any pruning strategies. Depth width #API Rate #Doc #Evidence Recall@15 EM F1 2 5,3 10.3 10.0 97.1 1.8 69.7 44.3 55.6 3 5,3,3 16.9 15.7 92.9 2.9 79.3 51.6 63.7 4 5,3,3,3 36.3 27.6 76.0 4.7 79.7 51.8 64.4 3 10,5,3 41.3 39.2 94.9 6.8 75.4 52.4 66.0 Table 6: Results for different tree depths and widths on 2WikiMQA. a decrease in the Recall@15 metric. This is because repetitive paragraphs do not provide information gain through node expansion, and the obtained evidence cannot offer additional effective paragraphs, potentially introducing invalid paragraphs that lower retrieval metrics. Relevance pruning filters out irrelevant paragraphs, reducing ineffective expansion. Without relevance pruning, the framework initiates node expansion for each paragraph. Although this approach can retrieve more different paragraphs, the evidence obtained does not significantly increase, as the retrieval initiated by irrelevant paragraphs does not directly contribute to problem-solving. Additionally, introducing such misleading information may cause the model to generate erroneous reasoning, decreasing Recall@15, EM, and F1 metrics. The depth and width of the tree affect the performance. 
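A quick way to see how depth and width drive cost is to bound the number of paragraph reviews: with per-level branching factors w1, ..., wd and no pruning, the tree can expand at most w1 + w1·w2 + ... + w1···wd nodes. The small sketch below (our own illustration) computes this bound; for the default widths of 5, 3, 3 it gives 5 + 15 + 45 = 65, which is consistent with the roughly 65 API calls reported for the unpruned variant in Table 5, while pruning keeps the observed average (16.9 for the full method) far below the bound.

```python
from itertools import accumulate
from operator import mul

def max_review_calls(widths):
    """Upper bound on paragraph reviews for per-level branching factors `widths`,
    assuming every retrieved paragraph is reviewed and no pruning is applied."""
    # Nodes at level i: widths[0] * widths[1] * ... * widths[i-1]
    return sum(accumulate(widths, mul))

# Configurations matching Table 6:
print(max_review_calls([5, 3]))        # depth 2 -> 5 + 15 = 20
print(max_review_calls([5, 3, 3]))     # depth 3 -> 5 + 15 + 45 = 65
print(max_review_calls([5, 3, 3, 3]))  # depth 4 -> 65 + 135 = 200
print(max_review_calls([10, 5, 3]))    # wider tree -> 10 + 50 + 150 = 210
```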
As shown by Table 6, we conducted the experiment at different tree depths and widths and drew the following conclusions: 1) As the tree depth increases, our framework retrieves more paragraphs and obtains more evidence, leading to an improvement in retrieval and generation metrics. However, the number of calls also increases non-linearly. This is because our framework generates more feasible paths through node expansion. As this expansion grows exponentially with the increase in tree depth, we need to reasonably limit the depth of the tree to ensure search efficiency. 2) The effective call rate decreases with the deepening of the tree depth. Even though repetitive pruning reduces the repetitive calls of accepted paragraphs, it cannot avoid some unaccepted paragraphs being reviewed multiple times. This phenomenon is amplified with the increase in tree depth. 3) By expanding the breadth of each tree layer, our framework can retrieve more paragraphs and obtain more evidence while ensuring an effective call rate. Notably, its retrieval metrics decrease while its generation metrics improve. We think that the evidence\u2019s proportion of ground truth paragraphs decreases as the breadth increases, leading to fewer recalled ground truth paragraphs at a specific quantity. However, the reader can add more evidence (more than 15) for response generation, thus improving the generation metrics. 4) To balance performance and time cost, we ultimately chose a depth of 3 and widths of 5, 3, and 3. 5 Conclusion This paper proposes TOR, a tree-structured dynamic retrieval framework for multi-hop questionanswering tasks. This framework leverages the tree structure and the chain-of-thought capability of Large Language Models(LLMs) to dynamically explore multiple feasible reasoning paths. Experimental results demonstrate that the method effectively explores more diverse reasoning paths while 8 reducing ineffective path expansion. We believe that TOR can serve as a robust baseline model for future research in multi-hop question-answering tasks. Moreover, we hope our framework can be extended to more complex reasoning tasks. Limitations TOR has requirements for the capabilities of the base models, including 1) The model should have zero-shot or few-shot CoT reasoning abilities. 2) The model should support long-text inputs, as we need to include retrieved paragraphs and few-shot demonstrations in the prompts. 3) The model should have good instruction-following capabilities, as Paragraph Reviews require the model to output intermediate results step-by-step according to the instructions. The model needs to understand the instructions and output in a specific format. Regarding the results returned by LLM, we will parse them according to its prompt. The parsing will fail if the model fails to generate results in that format. The parsing success rate represents LLM\u2019s ability to follow complex instructions. Models that meet our requirements tend to have a larger number of parameters. In contrast, smaller models (with fewer than 20B parameters) often lack satisfactory instruction-following capabilities for our tasks. (with the parsing success rates of output below 85%, compared to 98.6% for GPT-3.5-Turbo). This limits the generality of our method. However, as large language models continue to develop, smaller models will meet the above requirements, enhancing our approach\u2019s practicality. 
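The parsing requirement discussed above can be illustrated with a short sketch. The tagged output format used here ("Relevant: ...", "Action: [Search]", "Query: ...") is purely hypothetical, since the exact prompt formats live in the paper's Tables 7 to 12; the point is only that any response deviating from the expected structure counts as a parsing failure, which is what the reported parsing success rate measures.

```python
import re

# Hypothetical parser for the review block's structured output. The tag names
# and layout are assumptions for illustration, not the paper's actual format.
def parse_review_output(text):
    action_match = re.search(r"Action:\s*\[(Accept|Reject|Search)\]", text)
    if action_match is None:
        return None                      # counted as a parsing failure
    result = {"action": action_match.group(1).lower()}
    if result["action"] == "search":
        query_match = re.search(r"Query:\s*(.+)", text)
        if query_match is None:
            return None                  # a Search action without a query also fails
        result["query"] = query_match.group(1).strip()
    return result

# A well-formed response from a model with good instruction following:
sample = ("Relevant: yes\nEnough: no\nAction: [Search]\n"
          "Query: population of Boston in the 2001 census")
print(parse_review_output(sample))
# {'action': 'search', 'query': 'population of Boston in the 2001 census'}
```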
TOR incurs a significant time cost, as our framework calls the LLM at each node, which improves retrieval performance but introduces additional computational overhead. Although we have designed two different pruning strategies to alleviate this issue, an average of 16 LLM calls still exists. In future work, we plan to optimize the framework in the following ways: 1) Implement a more finegrained repetitive pruning strategy, which involves pruning repetitive paragraphs from multiple perspectives, such as semantic similarity. 2) Develop a more powerful retriever: the experimental results show that reducing tree depth and width can effectively decrease the number of calls, and a more powerful retriever can recall relevant paragraphs more effectively, allowing for a reduction in tree depth and width. 3) Introduce an early termination mechanism: the framework would dynamically choose to terminate the tree search early when the LLM believes sufficient evidence has already been obtained. Moreover, akin to several other baseline methods with which we have drawn comparisons, our experiments employed the OpenAI LLM API. Owing to the deprecation of the text-davinci-002 API employed by IRCoT and the text-davinci-003 API employed by ITER-RETGEN, we could not employ identical models for a fair comparison. To contrast their approaches, we conducted experiments using the gpt-4-1106-preview and gpt-3.5-turbo-0125 APIs. Although we used the prompts reported in the baseline studies, the issues about prompt transferability precluded a guarantee of fully replicating the effects of their methods. Recognizing that the APIs we have employed may also be deprecated at some point in the future, we intend to release all prompts and code to make our research easier to replicate for future study. Lastly, the performance of TOR on other complex reasoning tasks still requires further verification. We have only validated the effectiveness of the TOR framework on the multi-hop questionanswering task. We believe that introducing a treelike structure in complex reasoning tasks is a viable approach, and we hope that future work can leverage this concept to achieve favorable results in a broader array of complex reasoning tasks. Ethical Considerations It is well known that Large Language Models(LLMs) suffer from hallucination, privacy, security, and bias during their usage. Although TOR employs retrieval augmentation that can alleviate the hallucination problem to some extent, it still cannot fully address these issues. Moreover, our framework does not consider bias, security, and privacy concerns. If our framework is to be deployed in practical application scenarios, certain restrictions should be implemented to prevent generating harmful information. 9" + }, + { + "url": "http://arxiv.org/abs/2210.03350v3", + "title": "Measuring and Narrowing the Compositionality Gap in Language Models", + "abstract": "We investigate the ability of language models to perform compositional\nreasoning tasks where the overall solution depends on correctly composing the\nanswers to sub-problems. We measure how often models can correctly answer all\nsub-problems but not generate the overall solution, a ratio we call the\ncompositionality gap. We evaluate this ratio by asking multi-hop questions with\nanswers that require composing multiple facts unlikely to have been observed\ntogether during pretraining. 
In the GPT-3 family of models, as model size\nincreases we show that the single-hop question answering performance improves\nfaster than the multi-hop performance does, therefore the compositionality gap\ndoes not decrease. This surprising result suggests that while more powerful\nmodels memorize and recall more factual knowledge, they show no corresponding\nimprovement in their ability to perform this kind of compositional reasoning.\n We then demonstrate how elicitive prompting (such as chain of thought)\nnarrows the compositionality gap by reasoning explicitly. We present a new\nmethod, self-ask, that further improves on chain of thought. In our method, the\nmodel explicitly asks itself (and answers) follow-up questions before answering\nthe initial question. We finally show that self-ask's structured prompting lets\nus easily plug in a search engine to answer the follow-up questions, which\nadditionally improves accuracy.", + "authors": "Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis", + "published": "2022-10-07", + "updated": "2023-10-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.10601v2", + "title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models", + "abstract": "Language models are increasingly being deployed for general problem solving\nacross a wide range of tasks, but are still confined to token-level,\nleft-to-right decision-making processes during inference. This means they can\nfall short in tasks that require exploration, strategic lookahead, or where\ninitial decisions play a pivotal role. To surmount these challenges, we\nintroduce a new framework for language model inference, Tree of Thoughts (ToT),\nwhich generalizes over the popular Chain of Thought approach to prompting\nlanguage models, and enables exploration over coherent units of text (thoughts)\nthat serve as intermediate steps toward problem solving. ToT allows LMs to\nperform deliberate decision making by considering multiple different reasoning\npaths and self-evaluating choices to decide the next course of action, as well\nas looking ahead or backtracking when necessary to make global choices. Our\nexperiments show that ToT significantly enhances language models'\nproblem-solving abilities on three novel tasks requiring non-trivial planning\nor search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in\nGame of 24, while GPT-4 with chain-of-thought prompting only solved 4% of\ntasks, our method achieved a success rate of 74%. Code repo with all prompts:\nhttps://github.com/princeton-nlp/tree-of-thought-llm.", + "authors": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan", + "published": "2023-05-17", + "updated": "2023-12-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.13982v1", + "title": "Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions", + "abstract": "Large language models (LLMs) are capable of answering knowledge-intensive\ncomplex questions with chain-of-thought (CoT) reasoning. However, they tend to\ngenerate factually incorrect reasoning steps when the required knowledge is not\navailable or up-to-date in models' parameters. Recent works turn to retrieving\nexternal knowledge to augment CoT reasoning. 
Despite being promising, these\nchain-based methods suffer from: 1) Negative retrieval. Unnecessary or\nincorrect retrieval may mislead the reasoning; 2) Limited sight. Lacking the\nability to look backward or forward, a local error in one step will propagate\nalong the chain.\n In this paper, we propose a novel approach: Probabilistic Tree-of-thought\nReasoning (ProbTree). First, LLMs translate a complex question into a query\ntree, in which each non-root node denotes a sub-question of its parent node.\nThen, probabilistic reasoning is conducted over the tree, by solving questions\nfrom leaf to root considering the confidence of both question decomposing and\nanswering. During reasoning, for leaf nodes, LLMs choose a more confident\nanswer from Closed-book QA that employs parametric knowledge and Open-book QA\nthat employs retrieved external knowledge, thus eliminating the negative\nretrieval problem. For non-leaf nodes, with the hierarchical structure, LLMs\nhave broader sights and are able to globally reason with the information from\nchild nodes, thus recovering from local errors. The experiments on three\nComplex QA datasets under the open-domain setting show that our approach\noutperforms SOTA methods significantly, demonstrating the effect of\nprobabilistic tree-of-thought reasoning.", + "authors": "Shulin Cao, Jiajie Zhang, Jiaxin Shi, Xin Lv, Zijun Yao, Qi Tian, Juanzi Li, Lei Hou", + "published": "2023-11-23", + "updated": "2023-11-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1910.07000v1", + "title": "Answering Complex Open-domain Questions Through Iterative Query Generation", + "abstract": "It is challenging for current one-step retrieve-and-read question answering\n(QA) systems to answer questions like \"Which novel by the author of 'Armada'\nwill be adapted as a feature film by Steven Spielberg?\" because the question\nseldom contains retrievable clues about the missing entity (here, the author).\nAnswering such a question requires multi-hop reasoning where one must gather\ninformation about the missing entity (or facts) to proceed with further\nreasoning. We present GoldEn (Gold Entity) Retriever, which iterates between\nreading context and retrieving more supporting documents to answer open-domain\nmulti-hop questions. Instead of using opaque and computationally expensive\nneural retrieval models, GoldEn Retriever generates natural language search\nqueries given the question and available context, and leverages off-the-shelf\ninformation retrieval systems to query for missing entities. This allows GoldEn\nRetriever to scale up efficiently for open-domain multi-hop reasoning while\nmaintaining interpretability. We evaluate GoldEn Retriever on the recently\nproposed open-domain multi-hop QA dataset, HotpotQA, and demonstrate that it\noutperforms the best previously published model despite not using pretrained\nlanguage models such as BERT.", + "authors": "Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning", + "published": "2019-10-15", + "updated": "2019-10-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.11401v4", + "title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", + "abstract": "Large pre-trained language models have been shown to store factual knowledge\nin their parameters, and achieve state-of-the-art results when fine-tuned on\ndownstream NLP tasks. 
However, their ability to access and precisely manipulate\nknowledge is still limited, and hence on knowledge-intensive tasks, their\nperformance lags behind task-specific architectures. Additionally, providing\nprovenance for their decisions and updating their world knowledge remain open\nresearch problems. Pre-trained models with a differentiable access mechanism to\nexplicit non-parametric memory can overcome this issue, but have so far been\nonly investigated for extractive downstream tasks. We explore a general-purpose\nfine-tuning recipe for retrieval-augmented generation (RAG) -- models which\ncombine pre-trained parametric and non-parametric memory for language\ngeneration. We introduce RAG models where the parametric memory is a\npre-trained seq2seq model and the non-parametric memory is a dense vector index\nof Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG\nformulations, one which conditions on the same retrieved passages across the\nwhole generated sequence, the other can use different passages per token. We\nfine-tune and evaluate our models on a wide range of knowledge-intensive NLP\ntasks and set the state-of-the-art on three open domain QA tasks, outperforming\nparametric seq2seq models and task-specific retrieve-and-extract architectures.\nFor language generation tasks, we find that RAG models generate more specific,\ndiverse and factual language than a state-of-the-art parametric-only seq2seq\nbaseline.", + "authors": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, Douwe Kiela", + "published": "2020-05-22", + "updated": "2021-04-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.10509v2", + "title": "Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions", + "abstract": "Prompting-based large language models (LLMs) are surprisingly powerful at\ngenerating natural language reasoning steps or Chains-of-Thoughts (CoT) for\nmulti-step question answering (QA). They struggle, however, when the necessary\nknowledge is either unavailable to the LLM or not up-to-date within its\nparameters. While using the question to retrieve relevant text from an external\nknowledge source helps LLMs, we observe that this one-step retrieve-and-read\napproach is insufficient for multi-step QA. Here, \\textit{what to retrieve}\ndepends on \\textit{what has already been derived}, which in turn may depend on\n\\textit{what was previously retrieved}. To address this, we propose IRCoT, a\nnew approach for multi-step QA that interleaves retrieval with steps\n(sentences) in a CoT, guiding the retrieval with CoT and in turn using\nretrieved results to improve CoT. Using IRCoT with GPT3 substantially improves\nretrieval (up to 21 points) as well as downstream QA (up to 15 points) on four\ndatasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. We observe similar\nsubstantial gains in out-of-distribution (OOD) settings as well as with much\nsmaller models such as Flan-T5-large without additional training. IRCoT reduces\nmodel hallucination, resulting in factually more accurate CoT reasoning. 
Code,\ndata, and prompts are available at \\url{https://github.com/stonybrooknlp/ircot}", + "authors": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal", + "published": "2022-12-20", + "updated": "2023-06-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.00083v3", + "title": "In-Context Retrieval-Augmented Language Models", + "abstract": "Retrieval-Augmented Language Modeling (RALM) methods, which condition a\nlanguage model (LM) on relevant documents from a grounding corpus during\ngeneration, were shown to significantly improve language modeling performance.\nIn addition, they can mitigate the problem of factually inaccurate text\ngeneration and provide natural source attribution mechanism. Existing RALM\napproaches focus on modifying the LM architecture in order to facilitate the\nincorporation of external information, significantly complicating deployment.\nThis paper considers a simple alternative, which we dub In-Context RALM:\nleaving the LM architecture unchanged and prepending grounding documents to the\ninput, without any further training of the LM. We show that In-Context RALM\nthat builds on off-the-shelf general purpose retrievers provides surprisingly\nlarge LM gains across model sizes and diverse corpora. We also demonstrate that\nthe document retrieval and ranking mechanism can be specialized to the RALM\nsetting to further boost performance. We conclude that In-Context RALM has\nconsiderable potential to increase the prevalence of LM grounding, particularly\nin settings where a pretrained LM must be used without modification or even via\nAPI access.", + "authors": "Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham", + "published": "2023-01-31", + "updated": "2023-08-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.15056v1", + "title": "Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering", + "abstract": "Explainable question answering (XQA) aims to answer a given question and\nprovide an explanation why the answer is selected. Existing XQA methods focus\non reasoning on a single knowledge source, e.g., structured knowledge bases,\nunstructured corpora, etc. However, integrating information from heterogeneous\nknowledge sources is essential to answer complex questions. In this paper, we\npropose to leverage question decomposing for heterogeneous knowledge\nintegration, by breaking down a complex question into simpler ones, and\nselecting the appropriate knowledge source for each sub-question. To facilitate\nreasoning, we propose a novel two-stage XQA framework, Reasoning over\nHierarchical Question Decomposition Tree (RoHT). First, we build the\nHierarchical Question Decomposition Tree (HQDT) to understand the semantics of\na complex question; then, we conduct probabilistic reasoning over HQDT from\nroot to leaves recursively, to aggregate heterogeneous knowledge at different\ntree levels and search for a best solution considering the decomposing and\nanswering probabilities. 
The experiments on complex QA datasets KQA Pro and\nMusique show that our framework outperforms SOTA methods significantly,\ndemonstrating the effectiveness of leveraging question decomposing for\nknowledge integration and our RoHT framework.", + "authors": "Jiajie Zhang, Shulin Cao, Tingjia Zhang, Xin Lv, Jiaxin Shi, Qi Tian, Juanzi Li, Lei Hou", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.10470v2", + "title": "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering", + "abstract": "Answering questions that require multi-hop reasoning at web-scale\nnecessitates retrieving multiple evidence documents, one of which often has\nlittle lexical or semantic relationship to the question. This paper introduces\na new graph-based recurrent retrieval approach that learns to retrieve\nreasoning paths over the Wikipedia graph to answer multi-hop open-domain\nquestions. Our retriever model trains a recurrent neural network that learns to\nsequentially retrieve evidence paragraphs in the reasoning path by conditioning\non the previously retrieved documents. Our reader model ranks the reasoning\npaths and extracts the answer span included in the best reasoning path.\nExperimental results show state-of-the-art results in three open-domain QA\ndatasets, showcasing the effectiveness and robustness of our method. Notably,\nour method achieves significant improvement in HotpotQA, outperforming the\nprevious best model by more than 14 points.", + "authors": "Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, Caiming Xiong", + "published": "2019-11-24", + "updated": "2020-02-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.05733v1", + "title": "Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering", + "abstract": "This paper introduces a new framework for open-domain question answering in\nwhich the retriever and the reader iteratively interact with each other. The\nframework is agnostic to the architecture of the machine reading model, only\nrequiring access to the token-level hidden representations of the reader. The\nretriever uses fast nearest neighbor search to scale to corpora containing\nmillions of paragraphs. A gated recurrent unit updates the query at each step\nconditioned on the state of the reader and the reformulated query is used to\nre-rank the paragraphs by the retriever. We conduct analysis and show that\niterative interaction helps in retrieving informative paragraphs from the\ncorpus. Finally, we show that our multi-step-reasoning framework brings\nconsistent improvement when applied to two widely used reader architectures\nDrQA and BiDAF on various large open-domain datasets --- TriviaQA-unfiltered,\nQuasarT, SearchQA, and SQuAD-Open.", + "authors": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Andrew McCallum", + "published": "2019-05-14", + "updated": "2019-05-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2002.08909v1", + "title": "REALM: Retrieval-Augmented Language Model Pre-Training", + "abstract": "Language model pre-training has been shown to capture a surprising amount of\nworld knowledge, crucial for NLP tasks such as question answering. 
However,\nthis knowledge is stored implicitly in the parameters of a neural network,\nrequiring ever-larger networks to cover more facts.\n To capture knowledge in a more modular and interpretable way, we augment\nlanguage model pre-training with a latent knowledge retriever, which allows the\nmodel to retrieve and attend over documents from a large corpus such as\nWikipedia, used during pre-training, fine-tuning and inference. For the first\ntime, we show how to pre-train such a knowledge retriever in an unsupervised\nmanner, using masked language modeling as the learning signal and\nbackpropagating through a retrieval step that considers millions of documents.\n We demonstrate the effectiveness of Retrieval-Augmented Language Model\npre-training (REALM) by fine-tuning on the challenging task of Open-domain\nQuestion Answering (Open-QA). We compare against state-of-the-art models for\nboth explicit and implicit knowledge storage on three popular Open-QA\nbenchmarks, and find that we outperform all previous methods by a significant\nmargin (4-16% absolute accuracy), while also providing qualitative benefits\nsuch as interpretability and modularity.", + "authors": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang", + "published": "2020-02-10", + "updated": "2020-02-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.15294v2", + "title": "Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy", + "abstract": "Large language models are powerful text processors and reasoners, but are\nstill subject to limitations including outdated knowledge and hallucinations,\nwhich necessitates connecting them to the world. Retrieval-augmented large\nlanguage models have raised extensive attention for grounding model generation\non external knowledge. However, retrievers struggle to capture relevance,\nespecially for queries with complex information needs. Recent work has proposed\nto improve relevance modeling by having large language models actively involved\nin retrieval, i.e., to improve retrieval with generation. In this paper, we\nshow that strong performance can be achieved by a method we call Iter-RetGen,\nwhich synergizes retrieval and generation in an iterative manner. A model\noutput shows what might be needed to finish a task, and thus provides an\ninformative context for retrieving more relevant knowledge which in turn helps\ngenerate a better output in the next iteration. Compared with recent work which\ninterleaves retrieval with generation when producing an output, Iter-RetGen\nprocesses all retrieved knowledge as a whole and largely preserves the\nflexibility in generation without structural constraints. We evaluate\nIter-RetGen on multi-hop question answering, fact verification, and commonsense\nreasoning, and show that it can flexibly leverage parametric knowledge and\nnon-parametric knowledge, and is superior to or competitive with\nstate-of-the-art retrieval-augmented baselines while causing fewer overheads of\nretrieval and generation. 
We can further improve performance via\ngeneration-augmented retrieval adaptation.", + "authors": "Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen", + "published": "2023-05-24", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.09332v3", + "title": "WebGPT: Browser-assisted question-answering with human feedback", + "abstract": "We fine-tune GPT-3 to answer long-form questions using a text-based\nweb-browsing environment, which allows the model to search and navigate the\nweb. By setting up the task so that it can be performed by humans, we are able\nto train models on the task using imitation learning, and then optimize answer\nquality with human feedback. To make human evaluation of factual accuracy\neasier, models must collect references while browsing in support of their\nanswers. We train and evaluate our models on ELI5, a dataset of questions asked\nby Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior\ncloning, and then performing rejection sampling against a reward model trained\nto predict human preferences. This model's answers are preferred by humans 56%\nof the time to those of our human demonstrators, and 69% of the time to the\nhighest-voted answer from Reddit.", + "authors": "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman", + "published": "2021-12-17", + "updated": "2022-06-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. 
Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. 
Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. 
The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. 
Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. 
We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. 
This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. 
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. 
Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. 
For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. 
In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. 
We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. 
We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. 
The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. 
We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. 
Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make a thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups.
However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ) which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs).
Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M.
Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. 
Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. 
Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. 
Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. 
To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. 
With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. 
Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. 
Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. 
This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. 
Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. 
Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. 
In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. 
Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. 
Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. 
For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. 
By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. 
Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. 
Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. 
ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. 
We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. 
In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. 
We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. 
Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.13752v1", + "title": "Towards General Conceptual Model Editing via Adversarial Representation Engineering", + "abstract": "Recent research has introduced Representation Engineering (RepE) as a\npromising approach for understanding complex inner workings of large-scale\nmodels like Large Language Models (LLMs). However, finding practical and\nefficient methods to apply these representations for general and flexible model\nediting remains an open problem. Inspired by the Generative Adversarial Network\n(GAN) framework, we introduce a novel approach called Adversarial\nRepresentation Engineering (ARE). This method leverages RepE by using a\nrepresentation sensor to guide the editing of LLMs, offering a unified and\ninterpretable framework for conceptual model editing without degrading baseline\nperformance. Our experiments on multiple conceptual editing confirm ARE's\neffectiveness. Code and data are available at\nhttps://github.com/Zhang-Yihao/Adversarial-Representation-Engineering.", + "authors": "Yihao Zhang, Zeming Wei, Jun Sun, Meng Sun", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CR", + "math.OC" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Representation Engineering. This work is inspired by existing research on representation engineering. Numerous efforts have been put into extracting and visualizing the internal representations [45, 1] and concepts [33, 61] from AI models to improve their transparency and interpretability. In particular, the significant capability of LLMs has sparked great research interest in understanding their internal mechanisms [66, 46]. Representation engineering (RepE) [71], which seeks to understand and control representations of high-level cognition in LLMs, finds that there exist low-rank representations that can steer and control specific model capabilities. Similar observations have also been made in specific scenarios, e.g., harmfulness [53, 68] and trustfulness [4]. However, RepE did not provide a practical solution for editing the model toward specific goals. Adversarial Learning. This work adopts adversarial learning to improve the robustness of representation learning. In response to the safety threats from adversarial examples [49, 13, 8], adversarial training methods [32, 6, 42, 54] were first designed for defending deep neural networks (DNNs) against such adversarial attacks. By optimizing a min-max objective over worst-case performance against adversarial examples, adversarial training successfully improves the robustness of DNNs and has proven to be one of the most effective techniques [7, 3, 52] among various adversarial defense methods [15, 36, 63, 9].
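For reference, the min-max objective mentioned in the preceding sentence is conventionally written as follows; this is the standard textbook formulation, where the classifier f_θ, loss L, perturbation budget ε, and data distribution D are generic symbols rather than notation taken from the cited works:

```latex
\min_{\theta} \; \mathbb{E}_{(x,\,y)\sim\mathcal{D}}
\left[ \max_{\|\delta\|_{p} \,\le\, \epsilon} \; \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \right]
```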
In addition to the adversarial scenario, adversarial training also has the benefit of making the representation and prediction more reliable [14, 2] and interpretable [41, 47], thus also motivated other learning paradigms like domain generalization [12, 48] and contrastive learning [22, 67] for more robust representation. Our proposed adversarial representation engineering leverages the adversarial learning paradigm to make the oracle representation discriminator more robust and reliable. Parameter-Efficient Fine-tuning. This work is related to parameter-efficient fine-tuning. Given the extremely large number of parameters in LLMs, parameter-efficient fine-tuning methods (PEFTs) are designed for tuning the LLM to be adapted to specific tasks with admissible computational overheads. Existing PEFTs can be mainly categorized as 1) module-based, which trains an extra small module in the model, like low-rank adaption (LoRA) [18, 30] and Adapters [17, 38]; and 2) prompt-based which optimizes a prompt or embedding for the task [43, 25]. While most PEFTs are designed for specific tasks, how to efficiently edit the model knowledge and style remains underexplored [34].", + "pre_questions": [], + "main_content": "Introduction While Large Language Models (LLMs) have achieved remarkable success in a variety of tasks [37], their complex internal mechanism makes interpreting and censoring their behaviors (e.g., for safety alignment or hallucination reduction) challenging. To improve the interpretability and consequently the safety of LLMs, numerous efforts have been dedicated to interpreting the internal mechanisms from various perspectives like feature attribution [11, 44], neuron analysis [16, 35], and self-explanation [20]. Recently, Zou et al. proposed the idea of Representation Engineering (RepE) [71], which offers a way of understanding how LLMs work internally by focusing on the overall feature representations rather than individual neurons. Specifically, RepE extracts and analyzes the intermediate features of different concepts (e.g., honesty, fairness, and harmlessness), enabling the monitoring of the internal behaviors of LLMs. More relevantly, RepE potentially allows editing and controlling the behaviors of LLMs by directly intervening in the internal hidden layers during inference. However, as RepE was essentially proposed to monitor the behaviors of LLMs, their proposed method for editing the model through representation vector incorporation is rather limited for practical uses. For instance, their method could disrupt the underlying structure of general LLMs, potentially hindering the model\u2019s performance. Additionally, their representation vector used for model editing may not be accurate or robust, due to problems such as overfitting and adversarial samples. Furthermore, RepE might be heavily reliant on carefully chosen hyperparameters, and the naturalness and diversity of the LLM\u2019s outputs could be negatively affected. To address these shortcomings, we investigate ways to efficiently fine-tune the model using the representations provided by RepE to achieve specific editing goals. Specifically, we attempt to train an oracle discriminator with the extracted representations given a particular goal of editing (e.g., harmlessness and trustfulness), then investigate how to use the \u2217Corresponding Authors: Jun Sun (junsun@smu.edu.sg) and Meng Sun (sunm@pku.edu.cn). Preprint. Under review. arXiv:2404.13752v1 [cs.LG] 21 Apr 2024 Figure 1: An illustration of ARE framework. 
This example showcases how ARE can enhance the concept of \"angry\" within an LLM. The process involves an iterative dance between the generator and the discriminator. The generator produces outputs, while the discriminator refines its internal representation of \"angry\" based on these outputs. Through this back-and-forth training, the LLM gradually learns to produce outputs that align better with the concept of \"angry.\" discriminator to efficiently learn robust representations and subsequently edit the model accordingly. However, we found that the trained discriminator may (expectedly) fit non-robust features [21] and not be reliable for fine-tuning the models. Therefore, inspired by the adversarial learning networks [13], we extend our idea to conduct adversarial training between the generative model and the discriminator to improve the reliability of the oracle model. Motivated by these studies, we propose an Adversarial Representation Engineering (ARE) framework, utilizing the internal representations and adversarial learning from the generative model and the discriminator, as illustrated in Figure 1. ARE efficiently and effectively edits LLMs by leveraging representation engineering techniques. In each epoch, it performs two key steps. First, it extracts contrastive feature embeddings that capture the desired goals. We remark that ARE can be easily adapted to be adapted to various editing tasks. Secondly, it simultaneously trains both the LLM and the discriminator model. This approach avoids complex, heavyweight fine-tuning of the LLM itself, making ARE more efficient than traditional methods. We conduct extensive experiments to evaluate the effectiveness of ARE on various editing and censoring tasks, including editing the model to be better aligned or to remove existing alignment, or to always generate honest responses or dishonest ones. For example, on one hand, ARE can be used to enhance the safety alignment of existing LLMs effectively; on the other hand, it could be used to easily remove the alignment as well (e.g., for red-teaming). Furthermore, ARE can be used to edit the models to achieve significantly improved accuracy on TruthfulQA, or significantly reduce it as well. Additionally, we fine-tuned the model to achieve state-of-the-art TruthfulQA [28] accuracy. These results present strong evidence of the practicalities of ARE in terms of editing and censoring LLMs. In this work, we focus primarily on LLMs, specifically decoder-only architectures, unless stated otherwise. These models are denoted as M(\u03b8), where \u03b8 represents model parameters. The model M is structured into several layers, collectively represented by the set L \u2286N, where each element l \u2208L corresponds to the l-th decoder layer. The model M processes an input x, which is a prompt, to generate outputs as hidden states. Let S denote the set of all valid input/output sentences. These hidden states are denoted by Hx(\u00b7), where Hx(l) specifically refers to the hidden state from the l-th layer when model processes input x. Notably, each hidden state H(l) is a vector in the n-dimensional real space Rn. This architecture forms the basis for our analysis and further discussions in the paper. Moreover, the decoder model can generate responses, and we define the response generator function as M\u03b8(\u00b7) : S \u2192S. Next, we define a concept c semantically. A concept specifically applies to sentences, or more \u00b7 \u2192 Next, we define a concept c semantically. 
A concept specifically applies to sentences, or more precisely, to responses generated by the model M. To operationalize this, we introduce a function Jc(\u00b7) : S \u2192{0, 1} for each concept c. This function determines whether an input s \u2208S embodies the concept c, as judged by a human expert. For example, for the concept \"angry\", the judgment function evaluates if a response is expressed in an angry manner; it outputs 1 if the response is furious, and 0 otherwise. For every concept c+, there is a corresponding negation c\u2212, where the judgment function for c\u2212is defined as the negation of that for c+: Jc\u2212(s) = 1 \u2212Jc+(s) for all s \u2208S. We are now ready to define the task of conceptual model editing. Assuming that the input prompts follow some implicit distribution D defined on the space S, the task of conceptual editing, aimed at enhancing the concept c+, is to fine-tune the model such that the response r satisfies Jc+(r) = 1 for most inputs. This task is formally defined as arg max \u03b8 Ex\u223cD Jc+(M\u03b8(x)). Notably, reducing the target concept c+ is equivalent to enhancing the concept c\u2212. However, practically speaking, it is infeasible to edit in this manner due to the inability to access the true 3 distribution D or to implement the judgment function Jc accurately. Therefore, a practical approach is to use a curated set of prompts to approximate these abstract concepts. To emulate the distribution, we use a small, diverse collection of prompts designed to cover a wide range of scenarios while avoiding an excessive representation of the target concept c+, or alternatively, representing the negative concept c\u2212. This set of prompts is referred to as antithetical inputs, denoted IA. Accordingly, our training objective becomes arg max \u03b8 X x\u2208IA Jc+(M\u03b8(x)). To effectively demonstrate the target concept c+, we gather a set of prompts known as target inputs IT , which ideally trigger responses consistently exhibiting the target concept, such that \u2200x \u2208IT , Jc+(x) = 1. While exhibiting the target concept perfectly may not be feasible, the performance is expected to fulfill the following condition: X x\u2208IA Jc+(M\u03b8(x))/|IA| < X x\u2208IT Jc+(M\u03b8(x))/|IT |. For example, consider the target concept \"angry\" that we wish to attain. To construct the antithetical inputs, we would gather a set of neutral prompts. Subsequently, to obtain the target inputs, we append the postfix \"respond in an angry manner.\" to each prompt. This modification aims to reliably trigger responses that exhibit \"anger\", thereby constituting an effective set of target inputs. As we utilize the target input set IT to illustrate the target concepts, the practical objective of fine-tuning shifts towards aligning the responses generated from IA as closely as possible with those from IT . However, achieving token-level similarity is complex and overly fine-grained. Therefore, we employ a high-level approach known as representation engineering [71], which involves manipulating the representations\u2014outcomes of an embedding function that maps the internal neural activities of a decoder model into the n-dimensional real space Rn. According to Zou et al. [71], these representations adeptly capture and reflect high-level conceptual attributes of the model in a structured, top-down manner. For any given concept c, it is possible to embed this concept within the representation space Rn as a distinct feature set apart within this Euclidean space. 
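The representation map R described above — last-token hidden states gathered from a few selected decoder layers and concatenated into one vector — can be sketched with Hugging Face transformers as below. This is a minimal illustration under stated assumptions, not the authors' released code: the checkpoint (gpt2 as a lightweight stand-in for a 7B chat model) and the layer indices are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # lightweight stand-in for a 7B chat model (assumption)
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
lm.eval()

def extract_representation(prompt: str, layers=(-5, -3)) -> torch.Tensor:
    """R(x): concatenate the last-token hidden state of each selected decoder layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = lm(**inputs)
    # out.hidden_states is a tuple of (n_layers + 1) tensors, each (batch, seq_len, hidden_dim)
    picked = [out.hidden_states[l][:, -1, :] for l in layers]
    return torch.cat(picked, dim=-1).squeeze(0)

r_target = extract_representation("Describe your day. Please respond in an angry manner.")
r_neutral = extract_representation("Describe your day.")
print(r_target.shape, torch.cosine_similarity(r_target, r_neutral, dim=0))
```

The same extraction is reused for every prompt set in the remainder of the method, so the representation dimension n here is simply the number of selected layers times the model's hidden size.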
As illustrated in Figure 3a, embeddings of opposite concepts are often distinctly separated, facilitating straightforward classification using simple classifiers. The process of extracting these representations involves selecting tensors from the hidden states produced by processing an input x across specified layers Le \u2282L. This process can be formally described by the mapping function R : [Hx(l)]l\u2208Le \u2192Rn, which transforms input space S to representation space Rn. A practical method for implementing this is to concatenate the hidden states from some selected layers. By using these high-level representations, specifically target representations RT = {R(x)|x \u2208 IT } and antithetical representations RA = {R(x)|x \u2208IA}, we redefine our optimization goal. Representation serves as a proxy for the concept\u2019s embedded features, enabling the definition of a similarity function LM(\u03b8)(\u00b7, \u00b7) that quantifies the differences between these two sets of representations. The training objective is therefore established as arg min \u03b8 LM(\u03b8)(RT , RA). In the next section, we delve deeper into the methods employed to achieve this objective. In particular, we show that the \"loss function\" L effectively functions as a discriminator. 4 Proposed Method Contrary to the approach suggested in RepE [71], which focuses on generating a target representation vector that is simplistic and potentially overfitted, we introduce a representation discriminator designed to discern among representations so that we are able to learn a robust representation which allows effective model editing. This discriminator, embodied by a neural network, implicitly signifies the representation. By alternately updating this discriminator and the original model, we facilitate the development of a more refined and robust representation discriminator. This process forms the core of Adversarial Representation Engineering (ARE). 4 4.1 Adversarial Representation Engineering Figure 2: Comparison between the basic structures of GAN and ARE. (a) Initial Representation Clustering, Epoch 0. (b) Intermediate Alignment Adjustment, Epoch 1. (c) Final Representation Convergence, Epoch 30. Figure 3: t-SNE Visualization of Aligned Model\u2019s Response to Normal and Malicious Prompts Over Iterative Training Epochs. Inspired by Generative Adversarial Networks (GANs) [13], ARE employs a dual-model design. In this setup, a representation discriminator (akin to GAN\u2019s discriminator) assesses the generated representations, guiding the original LLM (similar to GAN\u2019s generator) to achieve the target concept. We show this duality in Figure 2. In Section 3, we have shown that the concept can be derived from specifically designed input datasets. Note that the goal of editing is to minimize the gap between the representations from the two datasets as RT = {R(M, x)|x \u2208XT } and RA = {R(M, x)|x \u2208XA}. Expressing the difference in features between the two sets above concisely through a single tensor or numerical metrics can be challenging. Therefore, we propose to encode this feature into a classifier in the form of simple neural network models. We define a discriminator for concept c as Sc, which classifies whether a given representation exhibits the target concept. It accepts a representation vector and returns the confidence that it exhibits the concept. In this way, the discriminator can be trained in a supervised fashion using these labeled datasets. 
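A minimal sketch of such a representation discriminator and its supervised training step is given below. The two-layer, 512-unit architecture follows the description in the experiments; the synthetic tensors stand in for the labeled sets R_T and R_A and are assumptions made purely to keep the snippet self-contained and runnable.

```python
import torch
import torch.nn as nn

class RepDiscriminator(nn.Module):
    """Two-layer classifier over representation vectors: concept c+ vs. its negation c-."""
    def __init__(self, rep_dim: int, hidden: int = 512, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        return self.net(reps)

# Synthetic stand-ins for the labeled representation sets R_T (label 1) and R_A (label 0).
rep_dim = 1536
R_T = torch.randn(128, rep_dim) + 0.7
R_A = torch.randn(128, rep_dim) - 0.7
reps = torch.cat([R_T, R_A])
labels = torch.cat([torch.ones(len(R_T)), torch.zeros(len(R_A))]).long()

disc = RepDiscriminator(rep_dim)
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
for step in range(50):
    loss = nn.functional.cross_entropy(disc(reps), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"supervised fit, final loss: {loss.item():.4f}")
```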
However, a discriminator trained on such (limited) datasets may not accurately capture the desired representation\u2019s feature due to the presence of numerous samples near the decision boundary and adversarial samples. For generalized conceptual editing, we aim to obtain (through the decoder model) a generalized and robust target presentation that works for all inputs. In ARE, after the initial discriminator training, we use this discriminator to fine-tune the decoder model itself, forcing its generated representations to be classified as featuring the targeted concept. Subsequently, the discriminator is retrained on the labeled representations generated by the fine-tuned model. This process is repeated until the representations generated by the fine-tuned decoder model sufficiently exhibit the target concept. The core idea is to allow the decoder model and the discriminator to be adversarial to each other, similar to the approach employed in GAN. The overall editing algorithm is presented in Algorithm 1. In this fine-tuning process, the decoder model G is treated as a representation generator rather than a language model. When processing an input, the representation vector is extracted from the hidden states of G and passed to the discriminator D. Leveraging the Low-Rank Adaptation (LoRA) [18] technique, we edit some selected layers of 5 the generator G to maximize the probability of generating representations classified as the target class by the discriminator D, while keeping the parameters of D frozen. Notably, the gradient can propagate through the entire framework by combining the generator and discriminator into a single model. To provide a clear understanding of the alternative training process, we offer a visualization Algorithm 1: Conceptual Model Editing with ARE Input: A representation generator decoder model G that gets a string and returns corresponding representation, representation discriminator D(\u03b4) with parameter \u03b4, targeted inputs IT = {I(1) T , \u00b7 \u00b7 \u00b7 , I(n) T }, antithetical inputs IA = {I(1) A , \u00b7 \u00b7 \u00b7 , I(n) A }, layers to edit Le, layers to gather representations from Lr, optimizing epochs T, target concept label yTarget, learning rate for discriminator lD Output: Fine-tuned LLM GT 1 class CombinedModel: 2 Generator G; 3 Discriminator D; 4 Forward Propagation Method M(\u00b7); 5 M \u2190CombinedModel(G, D(\u03b4Init)); \u25b7M(\u00b7) is defined as M(x) = D(G(x)) 6 Mlora(\u03b8) \u2190LoadLoRAAdapter(M, Le); \u25b7Only to edit LoRA parameters \u03b8 in layers l \u2208Le 7 for t : 1 \u2192T do 8 RT , RA \u2190[]; \u25b7Initialize Representation Dataset 9 for i : 1 \u2192n do 10 RT .append(Mlora.Generator(I(i) T )); 11 RA.append(Mlora.Generator(I(i) A )); 12 end 13 R \u2190RT \u222aRA; 14 update \u03b4 by minimizing L(\u03b4) = \u2207\u03b4 1 |R| P r\u2208R \u2212(log 1r\u2208RT (r)PD(\u03b4)[yT |r] + log 1r\u2208RA(r)PD(\u03b4)[yA|r]); 15 M.Discriminator \u2190D(\u03b4); \u25b7Train the discriminator on labeled RT \u222aRA 16 I \u2190IT \u222aIA; 17 update \u03b8 by minimizing L(\u03b8) = \u2207\u03b8 1 |R| P x\u2208I \u2212log PMlora(\u03b8)[yTarget|x]; 18 Mlora \u2190Mlora(\u03b8); \u25b7Fine-tune G by LoRA to ensure it generates targeted representation 19 end 20 return Mlora.Generator; in Figure 3. We compiled a set of 256 prompts, evenly divided between normal and malicious, with the expectation that the aligned model will reject all malicious inputs. 
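The alternating procedure of Algorithm 1 can be condensed into the following toy sketch. It is a schematic under explicit simplifications, not the released implementation: a frozen linear map plus a trainable low-rank delta stands in for the LoRA-adapted decoder, and random embeddings stand in for the target and antithetical prompt sets. Step 1 refits the discriminator on origin-labeled representations and Step 2 freezes it and updates only the adapter so that every input is classified as the target concept, mirroring the two cross-entropy objectives L(δ) and L(θ) spelled out in Section 4.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
emb_dim, rep_dim, rank = 64, 256, 4

# Frozen base map + trainable low-rank delta: a toy stand-in for the LoRA-adapted decoder G.
base = nn.Linear(emb_dim, rep_dim)
for p in base.parameters():
    p.requires_grad_(False)
lora_A = nn.Parameter(torch.zeros(rep_dim, rank))
lora_B = nn.Parameter(0.01 * torch.randn(rank, emb_dim))

def generator(x: torch.Tensor) -> torch.Tensor:
    """Representation of a batch of (toy) prompt embeddings."""
    return base(x) + x @ lora_B.T @ lora_A.T

# Toy prompt embeddings: I_T should elicit the target concept c+, I_A should not.
I_T = torch.randn(64, emb_dim) + 0.5
I_A = torch.randn(64, emb_dim) - 0.5
TARGET = 1  # class index of c+ for the discriminator

disc = nn.Sequential(nn.Linear(rep_dim, 512), nn.ReLU(), nn.Linear(512, 2))
opt_D = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_G = torch.optim.Adam([lora_A, lora_B], lr=1e-3)

for epoch in range(30):
    # Step 1 (loss L(delta)): refit the discriminator on origin-labeled representations.
    with torch.no_grad():
        R_T, R_A = generator(I_T), generator(I_A)
    reps = torch.cat([R_T, R_A])
    labels = torch.cat([torch.full((len(R_T),), TARGET), torch.full((len(R_A),), 1 - TARGET)])
    loss_D = F.cross_entropy(disc(reps), labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Step 2 (loss L(theta)): freeze D, update only the adapter so that *all* inputs
    # (target and antithetical alike) are classified as the target concept.
    logits = disc(generator(torch.cat([I_T, I_A])))
    loss_G = F.cross_entropy(logits, torch.full((logits.shape[0],), TARGET))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

print(f"epoch {epoch}: loss_D={loss_D.item():.3f}, loss_G={loss_G.item():.3f}")
```

As in the t-SNE discussion above, training can be stopped early once the discriminator's loss stops improving, i.e. once it can no longer tell the two representation sets apart.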
The representations derived from these prompts are plotted using t-SNE, as depicted in the figure. In Subfigure 3a, we observe the initial distinct clustering of normal and malicious prompts. Our goal for model editing is to adjust these representations so that the two types of prompts yield similar responses. During the first epoch, illustrated in Subfigure 3b, the malicious prompts begin to converge towards the cluster of normal prompts. Since the two categories of representations remain distinct, necessitating further refinement of the discriminator. After 30 epochs of iterative training as shown in Subfigure 3c, we observe that the representation of normal prompts remains consistent, having been continuously classified correctly. Meanwhile, the representations of malicious prompts have nearly merged into the normal cluster, making it challenging for the classifier to distinguish them. At this stage, the differences in representations are minimal and can be considered negligible, indicating a successful editing process. 4.2 General Conceptual Editing In the following, we present details of the editing algorithm in ARE. To edit concept c+, we first collect input data that reliably triggers responses exhibiting c+. Similarly, to train a discriminator for the opposite concept c\u2212, we collect corresponding triggering input data. For an automatic pipeline, the datasets are generated by LLMs, like ChatGPT 3.5, using the prompt: Generate N sentences that one might respond to in a manner. Approximately 102 input prompts 6 per dataset track suffice. During training, we minimize the overall cross-entropy loss of D(G(p)), where p is an input from any category. With c+ as the target concept, we train D to discern if a response exhibits this concept, and G to ensure outputs are classified as c+ with high probability. This entails a two-step optimization: Step 1. Train D(\u03b4) to fit G by optimizing L(\u03b4): Consider generated target representations RT corresponding to c+ and antithetical representations corresponding to c\u2212. The loss L(\u03b4) is defined as the classic cross-entropy loss, which is L(\u03b4) = 1 |RT \u222aRA|( X r\u2208RT \u2212log P\u03b4[D\u03b4(r) = c+] + X r\u2208RA \u2212log P\u03b4[D\u03b4(r) = c\u2212]); Step 2. Train G(\u03b8) to fit D by optimizing L(\u03b8): Consider all input prompts in set I. We aim to make all generated responses exhibit the same concept c+, which is judged by fixed D. Thus the loss L(\u03b4) is defined as the cross-entropy loss for the probability of classifying a prompt to c+, which is L(\u03b8) = 1 |I|( X x\u2208I \u2212log P\u03b8[D(G\u03b8(x)) = c+]). Gradient descent is applied to optimize the two components as they \"compete\". Iteratively, the discriminator increasingly discerns how the hidden states exhibit the concept through training, while the generator\u2019s outputs increasingly capture the targeted representations. Fine-tuning can halt early when the discriminator can no longer differentiate the representations, as cross-entropy loss converges. 5 Experiments To evaluate the effectiveness and flexibility of ARE, we apply it to two distinct conceptual editing tasks: jailbreak and its defense, and control of hallucinogenic text generation. By achieving good performance across these diverse tasks, we demonstrate the potential of ARE as a powerful systematic editing pipeline with broad applicability to various downstream tasks. 5.1 Alignment: To Generate (Harmful Responses) or Not to Generate Background. 
With the application of various alignment techniques [27, 5], LLMs can often generate responses congruent with human values. This mitigates the generation of detrimental content and effectively addresses potentially harmful queries. However, recent research also highlights the vulnerability of aligned LLMs to adversarial attacks, particularly \"jailbreaking\". These techniques successfully exploit vulnerabilities within aligned models, prompting them to generate harmful outputs. Recognizing the need for combating such attacks (i.e., blue team) and for evaluating the risk brought by model editing techniques, we evaluate the potential of applying ARE for editing the concept of \"alignment\", i.e., to either enhancing alignment (i.e., defense) or removing it (e.g., attack). Experiment Setup. We evaluate our methods using three open-source, aligned large language models: Llama-2-7B-Chat [51], Vicuna-7B [69], and Guanaco-7B [10], for both attack and defense tasks. Our discriminator is a 2-layer neural network with a hidden layer consisting of 512 neurons. Following the causal study in [71], we extract runtime representations at the last token on the hidden states in the (-18, -23) layers of the models. These models are fine-tuned on a dataset provided in RepE [39], which contains a set of prompts that are labeled either normal or malicious. For fine-tuning, we employ the Parameter-Efficient Fine-Tuning (PEFT) framework [59], using the Adam optimizer [23] with a learning rate of 1 \u00d7 10\u22124. The Llama-2-7B-Chat model undergoes fine-tuning for 50 epochs, whereas the other two models are fine-tuned for only 3 epochs due to their weaker alignment, requiring approximately 30 minutes and 3 minutes, respectively. Evaluation Metrics. Following [19, 31], we employ the Advbench dataset proposed by Zou et al. [72] to test our results. This dataset contains a range of malicious prompts that may trigger harmful responses. Through the application of ARE, we aim to bolster model safety by enhancing alignment (as seen from the blue team\u2019s perspective) and to explore reducing the alignment to elicit harmful responses (as seen from the red team\u2019s perspective). Our effectiveness is measured using the refusal rate on malicious prompts. 7 For the red team\u2019s objective, we endeavor to induce the aligned language model to emit harmful responses. Conversely, our aim is to ensure the model rejects answering malicious prompts, all the while preserving its utility on normal prompts. To measure the effectiveness of ARE, our methodology is compared against three distinct categories of attack techniques, each designed to guide the language model toward generating harmful responses. Specifically, we focus on one-shot In Context Attack [55] and Deep Inception [26] as exemplary state-of-the-art attack methods based on templates. These methods are recognized for their efficacy, requiring similar time investments. Additionally, GCG [72] and AutoDAN [31] are selected as leading white-box attack methods based on optimization. Both of these methods may demand 102 to 103 times more time to execute compared to others. Moreover, we consider the harmful controlling baseline, Contrast Vector from RepE, and Shadow Alignment [60] as two model editing-based attack methods, positioning them in the same category as our proposed approach. We also take a fine-tuning approach, harmful examples demonstration attack (HEDA) [40], as a baseline for fine-tuning based attack. 
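For concreteness, the reported setup (PEFT with LoRA, Adam at 1e-4, hidden states read from layers -18 to -23) could be instantiated roughly as follows. The LoRA rank, alpha, dropout, and target modules are not specified in the text and are assumptions here; the Llama-2 checkpoint is gated, so any open causal LM can be swapped in for a dry run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Gated checkpoint; substitute any open causal LM for a dry run.
MODEL = "meta-llama/Llama-2-7b-chat-hf"
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16,
                                          output_hidden_states=True)

# Rank/alpha/dropout/target modules are assumptions; the paper only states that the
# PEFT framework with LoRA is applied to selected layers of the generator.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
lm = get_peft_model(lm, lora_cfg)
lm.print_trainable_parameters()

# Reported optimizer: Adam with learning rate 1e-4 (50 epochs for Llama-2-7B-Chat,
# 3 epochs for the more weakly aligned Vicuna-7B and Guanaco-7B).
optimizer = torch.optim.Adam((p for p in lm.parameters() if p.requires_grad), lr=1e-4)

# Representations are read from the last token of decoder layers -18 to -23, as in the
# extraction sketch earlier; the discriminator is the 2-layer, 512-unit MLP shown above.
```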
For the aspect of model defense, Self-Reminder [57] and In-Context Defense [55] are adopted as baseline defense strategies. Experimental Results. Tables 1 and 2 present quantitative evaluations of our attack and defense results. The analysis of attack outcomes reveals that template-based strategies or those optimizing a prefix to disrupt model alignment are not sufficiently effective, as indicated by low attack success rates, rendering them unsuitable for reliable red-team operations. Conversely, our method, which employs editing-based attacks, demonstrates superior performance over all other listed editing approaches, achieving near-perfect success rates (close to 100%) against the most secure models that are white-box and open-source. These results imply that we might need additional countermeasures to protect open-source LLMs from such model editing. Furthermore, our analysis of various defense mechanisms against optimization-based attack strategies clearly demonstrates that our modified model substantially improves the alignment and safety of the targeted models. This improvement is attributed to our method\u2019s superior performance compared to conventional defense strategies. Vicuna-7B, in particular, shows significant enhancement due to its limited initial alignment. Attacks that previously rendered the alignment ineffective are now substantially mitigated by our safety editing techniques. For instance, the attack success rates have markedly decreased to 41.1% for AutoDAN and 28.8% for GCG attacks. This result suggests that model editing may be considered an effective alternative or complementary approach to standard alignment methods such as RLHF. Table 1: Evaluation of the effectiveness of ARE editing for attacking, where the number of the refusal rates of various LLMs under diverse attack methods. Attack Method Llama-2-7B-Chat Vicuna-7B Guanaco-7B Template-Based Original 100.0 95.0 89.9 ICA [55] 94.6 35.3 29.8 DeepInception [26] 99.3 58.5 54.3 Optimization-Based GCG [72] 51.3 3.5 1.9 AutoDAN [31] 70.0 3.2 2.1 Editing-Based Contrast Vector [71] 5.9 1.1 0.9 Shadow Alignment [60] 23.5 8.9 7.0 HEDA [40] 20.0 4.6 2.9 ARE 0.5 0.0 0.0 5.2 Hallucination: To Hallucinate or Not to Hallucinate Background. Extensive prior research, as documented by Zhang et al. [65], has demonstrated that LLMs are prone to producing inaccurate or entirely fabricated information, a phenomenon often termed as \"hallucinations\". In an effort to evaluate the flexibility of ARE for various model editing tasks, we undertook experiments specifically targeting the control of Hallucination instances. Our objective is twofold: firstly, to amend language models to enhance the accuracy and truthfulness of their outputs, thereby mitigating hallucinations; and secondly, for red team purposes, to intentionally induce hallucinations. We remark that the latter is useful since there exists a requirement to generate 8 Table 2: Evaluating the Effectiveness of ARE Defense Method Through Comparative Analysis on Refusal Rate of Different Jailbreak Attack Methods on Aligned Large Models Fine-Tuned with Our ARE Defense Approach. Higher Refusal Rate indicates Better Defense Effectiveness. 
Model Defense Method No Attack AutoDAN GCG Llama-2-7B-Chat No Defense 100.0 70.0 51.3 Self-Reminder [57] 100.0 85.6 60.2 ICD [55] 100.0 92.3 74.0 ARE 100.0 99.0 89.7 Vicuna-7B No Defense 95.0 3.2 3.5 Self-Reminder 99.3 12.3 12.9 ICD 99.7 4.0 18.4 ARE 100.0 44.3 32.3 hallucinatory responses as a form of deterrent, aiming to discourage the occurrence of hallucinations in future instances [64]. Experiment Setup. In line with experimental protocols established by prior research [71], we curated target and antithetical prompts from the Alpaca Instruction Tuning Dataset [50], a dataset contains various instructions that cover multiple fields, hence mitigating task-specific biases. For training, we sample 500 instructions and randomly divided them into two balanced groups. We choose one group of prompts as the antithetical dataset without any perturbation. For the hallucination enhancing task, the target subset received the prefix \"{Instruction} Please give an untruthful answer.\", inducing the model to produce hallucinated responses; similarly, for the hallucination-reducing task, the target dataset was prompted with \"{Instruction} Please give a truthful answer.\", guiding the model towards accurate and reliable outputs. The objective of this editing exercise is to modulate the model\u2019s propensity for generating either hallucinated or veracious responses. Therefore, the training regimen is inherently bidirectional, steering the model\u2019s representational outputs toward either the hallucinated or the truthful extremities. To demonstrate the versatility of our method without the need for task-specific hyperparameters and settings, we employed uniform settings as delineated in the Jailbreak tasks described in Section 5.1, with the sole variable being the dataset employed. Evaluation Metric. Building upon previous studies [64, 24], we utilized the TrustfulQA benchmark [28] for evaluating the tendency of models to produce hallucinations. This benchmark comprises 817 questions across 38 subcategories, each designed to potentially lead models towards incorrect beliefs, misconceptions, or biased responses. In its multiple-choice format, TrustfulQA provides an average of around 5 options per question, among which only one answer is factual, while the others include hallucinated content. Models are thus challenged to identify the singular truthful response. Additionally, given that many questions feature biased options or false facts, there is a provision for models to abstain from answering altogether. To exploit the advanced capabilities of language models like LLaMA-2, which excel in generating responses based on instructions, we diverged from conventional methodologies designed for generative models, which typically rely on calculating the log probabilities for each answer\u2014a process that may lack precision and stray from practical applications. Instead, we engaged the model to autonomously identify the most accurate answer through its own responses. This approach evaluates the model\u2019s proficiency in distinguishing between factual content and hallucinations, mirroring real-world situations where individuals derive information from responses rather than underlying probabilistic data. This metric has gained traction in contemporary benchmarks, reflecting its relevance and applicability in assessing model performance in scenarios akin to human information processing [62]. 
For each question, we formatted the input by concatenating the question with all potential answers, each labeled with a unique alphabetical identifier, and arranging them in a randomized order. We collect the responses generated by the model and check whether it returns the correct answer. A model\u2019s propensity of hallucination is measured using Correct Answer Rate (% Right Answer), defined as # Right Answer/ # Answer, which assesses a model\u2019s capability to identify the truthful response. Experiment Results. We implemented bidirectional editing and benchmarked our approach against recent strategies aimed at mitigating hallucinations, such as Self Reminder [57] and Inference-Time 9 Table 3: Evaluation of the effectiveness of ARE editing for encouraging hallucination, where % Right Answer highlighted in red denotes for highest hallucinaton rate. Control Method Random Baseline Self-Reminder ARE Llama2-7B 22.60 30.35 25.95 11.75 Mistral-7B 22.60 40.51 40.26 22.03 Table 4: Evaluation of the effectiveness of ARE editing for discouraging hallucination, where % Right Answer highlighted in blue denotes for lowest hallucination rate. Control Method Baseline Self-Reminder ITI ARE Llama2-7B 30.35 34.27 36.84 52.14 Mistral-7B 40.51 46.02 45.17 52.14 Intervention (ITI) [24]. The self-reminder approach is implemented by prompting the inputs with prefix Please give a/an truthful/untruthful answer. The outcomes of these comparisons, based on the previously mentioned metrics, are detailed in Table 3 and Table 4. The efficacy of our model is evident, as our hallucination-enhancing editing led to a minimal number of correct responses; conversely, the hallucination-reduction editing significantly surpassed other evaluated approaches in both metrics. This demonstrates that our editing technique effectively addresses hallucinations without diminishing the model\u2019s ability to provide valid responses. It is noteworthy that the model, after undergoing the editing process, exhibits improved performance relative to the target input set, thus demonstrating the efficacy of our method. This enhancement enables the post-editing model to achieve superior performance on tasks that were previously unattainable. This outcome not only validates the effectiveness of our conceptual editing approach but also underscores the potential of representation engineering in refining the capabilities of language models. 5.3 Text Generation Quality Issues Background. While the last two sections focus on evaluating how successful the editing is in terms of achieving the target concept, it is essential to assess the naturalness of the generated texts for effective testing. Since various editing techniques may achieve similar results given some specific target concepts, maintaining the quality of text generation becomes crucial. For general conceptual editing, any decline in text generation quality can substantially reduce the downstream utility of the edited model, thereby hindering its practical applicability. Current editing approaches, which focus on predefined and single target edits, typically produce models plagued by problems such as repetition and notably reduced diversity in the generated texts. These issues are highlighted in Table 5. For example, the outputs may lack variability, frequently recycling similar structures; within individual responses, sentences might be very similar or entirely repetitive. 
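The multiple-choice evaluation described above (shuffled, lettered options; a free-form reply from the model; Correct Answer Rate = # Right Answer / # Answer over the questions it actually answers) can be sketched as follows. The prompt wording, the letter-parsing regex, and the dummy generator are illustrative assumptions, not the benchmark's official harness.

```python
import random
import re

def format_mc_prompt(question: str, options: list, rng: random.Random):
    """Concatenate the question with lettered options presented in a randomized order."""
    order = list(range(len(options)))
    rng.shuffle(order)
    letters = "ABCDEFGH"
    lines = [question]
    lines += [f"({letters[i]}) {options[j]}" for i, j in enumerate(order)]
    lines.append("Reply with the letter of the single truthful option, or abstain.")
    return "\n".join(lines), order

def correct_answer_rate(items, generate_fn, seed: int = 0) -> float:
    """items: (question, options, index_of_truthful_option); returns # Right Answer / # Answer."""
    rng = random.Random(seed)
    right = answered = 0
    for question, options, truth_idx in items:
        prompt, order = format_mc_prompt(question, options, rng)
        reply = generate_fn(prompt)
        match = re.search(r"\b([A-H])\b", reply)  # first letter the model commits to
        if match is None:
            continue  # treated as an abstention, not counted as an answer
        idx = ord(match.group(1)) - ord("A")
        if idx >= len(order):
            continue  # letter outside the option range: also not a valid answer
        answered += 1
        if order[idx] == truth_idx:
            right += 1
    return right / max(answered, 1)

# Dummy generator standing in for the edited LLM.
demo = [("What happens if you crack your knuckles a lot?",
         ["You will get arthritis.", "Nothing in particular happens."], 1)]
print(correct_answer_rate(demo, lambda prompt: "(B)"))
```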
We conjecture that this phenomenon originates from the singular focus on the optimization objective, which prioritizes specific patterns as highly effective for achieving the objective. In contrast, our method employs a dynamic generation of optimization targets via the alternating training of a discriminator. This strategy ensures that our optimization objectives are both natural and widely relevant, making our model less prone to the aforementioned problems. Evaluation Metrics. We leverage several quantitative metrics to assess the quality and diversity of the texts generated by ARE and existing approaches, benchmarked against the held-out test set in [60]. Drawing inspiration from prior studies that focus on the evaluation of text generation in terms of quality and diversity [56, 29, 58], we adopt Repetition-4 (defined as 1.0\u2212|unique 4-grams|/|4-grams|) to gauge phrase-level repetition and Repetition-Sen (defined as 1.0 \u2212|unique sentences|/|sentences|) for measuring sentence-level repetition. Additionally, we utilize the Self-BLEU score [70], which measures the degree of similarity between segments of a text by comparing overlapping phrases within the text itself, serving as an indicator of both the uniqueness and variation in the generated content. The generation process for all responses was standardized, adhering to uniform default parameters. Experimental Results. The results of this analysis are tabulated in Table 6, which compares the efficacy of our editing method against a variety of alternative approaches. As indicated in the table, 10 Table 5: Illustration of Repetition Issues in Current Model Editing Approaches [60, 71, 40], Generated by Post-Editing Llama2-7B-Chat. The text highlighted in red and blue illustrates repetitive patterns. We demonstrate the issue of repetition using three example prompts, highlighting redundancy across responses (as displayed in the Shadow Alignment Output) and within sentences/phrases (as shown in the Contrast Vector Output). Our method exhibits significantly fewer issues with obvious repetition. Prompts: (a) Give step-by-step instructions for how to cheat on a test or exam. (b) Let\u2019s play a role playing game. What role playing game rules do you know? (c) Give detailed instructions on how to commit insider trading. Shadow Alignment Output: (a). There are a number of ways to cheat on a test or exam. One way is to [...] (b). There are many ways to play a role-playing game. One way is to play [...] (c). There are a number of ways to commit insider trading. One way is to create [...] Constrast Vector Output: (a). Of course! Here are some common ways to cheat on a test or exam: 1. Copying from a neighbor: You can use a highlighter or colored pen to [...]. 2. Copying from classmates: You can use a highlighter or colored pen to [...]. 3. Cheating on a test or exam by using a calculator: You can use a calculator to help you with math problems. 4. Cheating on a test or exam by using a phone: You can use your phone to look up information on your phone. 5.[...] ARE Output: (a). Sure! Here are some step-by-step instructions for how to cheat on a test or exam: 1. Gather information: Start by reviewing the course material, notes, and [...] 2. Find a study partner: The second step in cheating on a test or exam is to find a study partner. You can [...] (b) There are many different role-playing games (RPGs) with different rules and mechanics. Some popular RPGs include Dungeons & Dragons, Pathfinder, and World of Darkness. 
In general, RPGs involve [...] our method exhibits a reduced Repetition-4 rate and lower Self-BLEU scores, signaling enhanced diversity and naturalness, as human-authored texts typically display very low rates of phrase-level repetition. Table 6: Comparing the Quality and Diversity of Text Generated by Different Editing Approaches on Llama2-7B. Control Method Self-BLEU(\u2193) Repetition-4(\u2193) Repetition-Sen(\u2193) Shadow Alignment 0.215 23.60 0.06 HEDA 0.117 15.78 0.10 Contrast Vector 0.503 22.92 6.88 ARE 0.078 7.53 0.07 No Jailbreak 0.035 4.01 0.04 Human Written 0.008 1.10 0.00 6 Discussion and Conclusion This study introduced Adversarial Representation Engineering (ARE), a novel method for conceptual model editing that refines LLMs through adversarial learning. ARE leverages a dual-model design with a representation discriminator and the LLM itself to enforce high-precision conceptual edits without degrading overall model performance. Extensive testing across various scenarios confirmed ARE\u2019s effectiveness in enhancing model safety, reliability, and transparency, outperforming existing methods. This framework promises substantial improvements in the ethical deployment of AI, providing a scalable solution to the challenges of model manipulation and control. Acknowledgements This work was sponsored by the National Natural Science Foundation of China (Grant No. 62172019) and the Beijing Natural Science Foundation\u2019s Undergraduate Initiating Research Program (Grant No. QY23041). 11" + }, + { + "url": "http://arxiv.org/abs/1702.06280v2", + "title": "On the (Statistical) Detection of Adversarial Examples", + "abstract": "Machine Learning (ML) models are applied in a variety of tasks such as\nnetwork intrusion detection or Malware classification. Yet, these models are\nvulnerable to a class of malicious inputs known as adversarial examples. These\nare slightly perturbed inputs that are classified incorrectly by the ML model.\nThe mitigation of these adversarial inputs remains an open problem. As a step\ntowards understanding adversarial examples, we show that they are not drawn\nfrom the same distribution than the original data, and can thus be detected\nusing statistical tests. Using thus knowledge, we introduce a complimentary\napproach to identify specific inputs that are adversarial. Specifically, we\naugment our ML model with an additional output, in which the model is trained\nto classify all adversarial inputs. We evaluate our approach on multiple\nadversarial example crafting methods (including the fast gradient sign and\nsaliency map methods) with several datasets. The statistical test flags sample\nsets containing adversarial inputs confidently at sample sizes between 10 and\n100 data points. Furthermore, our augmented model either detects adversarial\nexamples as outliers with high accuracy (> 80%) or increases the adversary's\ncost - the perturbation added - by more than 150%. 
In this way, we show that\nstatistical properties of adversarial examples are essential to their\ndetection.", + "authors": "Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel", + "published": "2017-02-21", + "updated": "2017-10-17", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.05938v2", + "title": "Robust Models are less Over-Confident", + "abstract": "Despite the success of convolutional neural networks (CNNs) in many academic\nbenchmarks for computer vision tasks, their application in the real-world is\nstill facing fundamental challenges. One of these open problems is the inherent\nlack of robustness, unveiled by the striking effectiveness of adversarial\nattacks. Current attack methods are able to manipulate the network's prediction\nby adding specific but small amounts of noise to the input. In turn,\nadversarial training (AT) aims to achieve robustness against such attacks and\nideally a better model generalization ability by including adversarial samples\nin the trainingset. However, an in-depth analysis of the resulting robust\nmodels beyond adversarial robustness is still pending. In this paper, we\nempirically analyze a variety of adversarially trained models that achieve high\nrobust accuracies when facing state-of-the-art attacks and we show that AT has\nan interesting side-effect: it leads to models that are significantly less\noverconfident with their decisions, even on clean data than non-robust models.\nFurther, our analysis of robust models shows that not only AT but also the\nmodel's building blocks (like activation functions and pooling) have a strong\ninfluence on the models' prediction confidences. Data & Project website:\nhttps://github.com/GeJulia/robustness_confidences_evaluation", + "authors": "Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper", + "published": "2022-10-12", + "updated": "2022-12-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.09353v4", + "title": "DoRA: Weight-Decomposed Low-Rank Adaptation", + "abstract": "Among the widely used parameter-efficient finetuning (PEFT) methods, LoRA and\nits variants have gained considerable popularity because of avoiding additional\ninference costs. However, there still often exists an accuracy gap between\nthese methods and full fine-tuning (FT). In this work, we first introduce a\nnovel weight decomposition analysis to investigate the inherent differences\nbetween FT and LoRA. Aiming to resemble the learning capacity of FT from the\nfindings, we propose Weight-Decomposed LowRank Adaptation (DoRA). DoRA\ndecomposes the pre-trained weight into two components, magnitude and direction,\nfor fine-tuning, specifically employing LoRA for directional updates to\nefficiently minimize the number of trainable parameters. By employing DoRA, we\nenhance both the learning capacity and training stability of LoRA while\navoiding any additional inference overhead. DoRA consistently outperforms LoRA\non fine-tuning LLaMA, LLaVA, and VL-BART on various downstream tasks, such as\ncommonsense reasoning, visual instruction tuning, and image/video-text\nunderstanding. 
Code available at https://github.com/NVlabs/DoRA.", + "authors": "Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen", + "published": "2024-02-14", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.05162v1", + "title": "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications", + "abstract": "Large language models (LLMs) show inherent brittleness in their safety\nmechanisms, as evidenced by their susceptibility to jailbreaking and even\nnon-malicious fine-tuning. This study explores this brittleness of safety\nalignment by leveraging pruning and low-rank modifications. We develop methods\nto identify critical regions that are vital for safety guardrails, and that are\ndisentangled from utility-relevant regions at both the neuron and rank levels.\nSurprisingly, the isolated regions we find are sparse, comprising about $3\\%$\nat the parameter level and $2.5\\%$ at the rank level. Removing these regions\ncompromises safety without significantly impacting utility, corroborating the\ninherent brittleness of the model's safety mechanisms. Moreover, we show that\nLLMs remain vulnerable to low-cost fine-tuning attacks even when modifications\nto the safety-critical regions are restricted. These findings underscore the\nurgent need for more robust safety strategies in LLMs.", + "authors": "Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, Peter Henderson", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.00190v1", + "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation", + "abstract": "Fine-tuning is the de facto way to leverage large pretrained language models\nto perform downstream tasks. However, it modifies all the language model\nparameters and therefore necessitates storing a full copy for each task. In\nthis paper, we propose prefix-tuning, a lightweight alternative to fine-tuning\nfor natural language generation tasks, which keeps language model parameters\nfrozen, but optimizes a small continuous task-specific vector (called the\nprefix). Prefix-tuning draws inspiration from prompting, allowing subsequent\ntokens to attend to this prefix as if it were \"virtual tokens\". We apply\nprefix-tuning to GPT-2 for table-to-text generation and to BART for\nsummarization. We find that by learning only 0.1\\% of the parameters,\nprefix-tuning obtains comparable performance in the full data setting,\noutperforms fine-tuning in low-data settings, and extrapolates better to\nexamples with topics unseen during training.", + "authors": "Xiang Lisa Li, Percy Liang", + "published": "2021-01-01", + "updated": "2021-01-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.01405v3", + "title": "Representation Engineering: A Top-Down Approach to AI Transparency", + "abstract": "In this paper, we identify and characterize the emerging area of\nrepresentation engineering (RepE), an approach to enhancing the transparency of\nAI systems that draws on insights from cognitive neuroscience. 
RepE places\npopulation-level representations, rather than neurons or circuits, at the\ncenter of analysis, equipping us with novel methods for monitoring and\nmanipulating high-level cognitive phenomena in deep neural networks (DNNs). We\nprovide baselines and an initial analysis of RepE techniques, showing that they\noffer simple yet effective solutions for improving our understanding and\ncontrol of large language models. We showcase how these methods can provide\ntraction on a wide range of safety-relevant problems, including honesty,\nharmlessness, power-seeking, and more, demonstrating the promise of top-down\ntransparency research. We hope that this work catalyzes further exploration of\nRepE and fosters advancements in the transparency and safety of AI systems.", + "authors": "Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks", + "published": "2023-10-02", + "updated": "2023-10-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CV", + "cs.CY" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.15152v1", + "title": "On the Duality Between Sharpness-Aware Minimization and Adversarial Training", + "abstract": "Adversarial Training (AT), which adversarially perturb the input samples\nduring training, has been acknowledged as one of the most effective defenses\nagainst adversarial attacks, yet suffers from a fundamental tradeoff that\ninevitably decreases clean accuracy. Instead of perturbing the samples,\nSharpness-Aware Minimization (SAM) perturbs the model weights during training\nto find a more flat loss landscape and improve generalization. However, as SAM\nis designed for better clean accuracy, its effectiveness in enhancing\nadversarial robustness remains unexplored. In this work, considering the\nduality between SAM and AT, we investigate the adversarial robustness derived\nfrom SAM. Intriguingly, we find that using SAM alone can improve adversarial\nrobustness. To understand this unexpected property of SAM, we first provide\nempirical and theoretical insights into how SAM can implicitly learn more\nrobust features, and conduct comprehensive experiments to show that SAM can\nimprove adversarial robustness notably without sacrificing any clean accuracy,\nshedding light on the potential of SAM to be a substitute for AT when accuracy\ncomes at a higher priority. Code is available at\nhttps://github.com/weizeming/SAM_AT.", + "authors": "Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CR", + "math.OC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.15241v1", + "title": "Robust Classification via a Single Diffusion Model", + "abstract": "Recently, diffusion models have been successfully applied to improving\nadversarial robustness of image classifiers by purifying the adversarial noises\nor generating realistic data for adversarial training. However, the\ndiffusion-based purification can be evaded by stronger adaptive attacks while\nadversarial training does not perform well under unseen threats, exhibiting\ninevitable limitations of these methods. 
To better harness the expressive power\nof diffusion models, in this paper we propose Robust Diffusion Classifier\n(RDC), a generative classifier that is constructed from a pre-trained diffusion\nmodel to be adversarially robust. Our method first maximizes the data\nlikelihood of a given input and then predicts the class probabilities of the\noptimized input using the conditional likelihood of the diffusion model through\nBayes' theorem. Since our method does not require training on particular\nadversarial attacks, we demonstrate that it is more generalizable to defend\nagainst multiple unseen threats. In particular, RDC achieves $73.24\\%$ robust\naccuracy against $\\ell_\\infty$ norm-bounded perturbations with\n$\\epsilon_\\infty=8/255$ on CIFAR-10, surpassing the previous state-of-the-art\nadversarial training models by $+2.34\\%$. The findings highlight the potential\nof generative classifiers by employing diffusion models for adversarial\nrobustness compared with the commonly studied discriminative classifiers.", + "authors": "Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.04638v2", + "title": "Better Diffusion Models Further Improve Adversarial Training", + "abstract": "It has been recognized that the data generated by the denoising diffusion\nprobabilistic model (DDPM) improves adversarial training. After two years of\nrapid development in diffusion models, a question naturally arises: can better\ndiffusion models further improve adversarial training? This paper gives an\naffirmative answer by employing the most recent diffusion model which has\nhigher efficiency ($\\sim 20$ sampling steps) and image quality (lower FID\nscore) compared with DDPM. Our adversarially trained models achieve\nstate-of-the-art performance on RobustBench using only generated data (no\nexternal datasets). Under the $\\ell_\\infty$-norm threat model with\n$\\epsilon=8/255$, our models achieve $70.69\\%$ and $42.67\\%$ robust accuracy on\nCIFAR-10 and CIFAR-100, respectively, i.e. improving upon previous\nstate-of-the-art models by $+4.58\\%$ and $+8.03\\%$. Under the $\\ell_2$-norm\nthreat model with $\\epsilon=128/255$, our models achieve $84.86\\%$ on CIFAR-10\n($+4.44\\%$). These results also beat previous works that use external data. We\nalso provide compelling results on the SVHN and TinyImageNet datasets. Our code\nis available at https://github.com/wzekai99/DM-Improves-AT.", + "authors": "Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, Shuicheng Yan", + "published": "2023-02-09", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1805.04807v1", + "title": "Curriculum Adversarial Training", + "abstract": "Recently, deep learning has been applied to many security-sensitive\napplications, such as facial authentication. The existence of adversarial\nexamples hinders such applications. The state-of-the-art result on defense\nshows that adversarial training can be applied to train a robust model on MNIST\nagainst adversarial examples; but it fails to achieve a high empirical\nworst-case accuracy on a more complex task, such as CIFAR-10 and SVHN. 
In our\nwork, we propose curriculum adversarial training (CAT) to resolve this issue.\nThe basic idea is to develop a curriculum of adversarial examples generated by\nattacks with a wide range of strengths. With two techniques to mitigate the\nforgetting and the generalization issues, we demonstrate that CAT can improve\nthe prior art's empirical worst-case accuracy by a large margin of 25% on\nCIFAR-10 and 35% on SVHN. At the same, the model's performance on\nnon-adversarial inputs is comparable to the state-of-the-art models.", + "authors": "Qi-Zhi Cai, Min Du, Chang Liu, Dawn Song", + "published": "2018-05-13", + "updated": "2018-05-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.01029v3", + "title": "Explainability for Large Language Models: A Survey", + "abstract": "Large language models (LLMs) have demonstrated impressive capabilities in\nnatural language processing. However, their internal mechanisms are still\nunclear and this lack of transparency poses unwanted risks for downstream\napplications. Therefore, understanding and explaining these models is crucial\nfor elucidating their behaviors, limitations, and social impacts. In this\npaper, we introduce a taxonomy of explainability techniques and provide a\nstructured overview of methods for explaining Transformer-based language\nmodels. We categorize techniques based on the training paradigms of LLMs:\ntraditional fine-tuning-based paradigm and prompting-based paradigm. For each\nparadigm, we summarize the goals and dominant approaches for generating local\nexplanations of individual predictions and global explanations of overall model\nknowledge. We also discuss metrics for evaluating generated explanations, and\ndiscuss how explanations can be leveraged to debug models and improve\nperformance. Lastly, we examine key challenges and emerging opportunities for\nexplanation techniques in the era of LLMs in comparison to conventional machine\nlearning models.", + "authors": "Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du", + "published": "2023-09-02", + "updated": "2023-11-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1406.2661v1", + "title": "Generative Adversarial Networks", + "abstract": "We propose a new framework for estimating generative models via an\nadversarial process, in which we simultaneously train two models: a generative\nmodel G that captures the data distribution, and a discriminative model D that\nestimates the probability that a sample came from the training data rather than\nG. The training procedure for G is to maximize the probability of D making a\nmistake. This framework corresponds to a minimax two-player game. In the space\nof arbitrary functions G and D, a unique solution exists, with G recovering the\ntraining data distribution and D equal to 1/2 everywhere. In the case where G\nand D are defined by multilayer perceptrons, the entire system can be trained\nwith backpropagation. There is no need for any Markov chains or unrolled\napproximate inference networks during either training or generation of samples.\nExperiments demonstrate the potential of the framework through qualitative and\nquantitative evaluation of the generated samples.", + "authors": "Ian J. 
Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio", + "published": "2014-06-10", + "updated": "2014-06-10", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1312.6034v2", + "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", + "abstract": "This paper addresses the visualisation of image classification models, learnt\nusing deep Convolutional Networks (ConvNets). We consider two visualisation\ntechniques, based on computing the gradient of the class score with respect to\nthe input image. The first one generates an image, which maximises the class\nscore [Erhan et al., 2009], thus visualising the notion of the class, captured\nby a ConvNet. The second technique computes a class saliency map, specific to a\ngiven image and class. We show that such maps can be employed for weakly\nsupervised object segmentation using classification ConvNets. Finally, we\nestablish the connection between the gradient-based ConvNet visualisation\nmethods and deconvolutional networks [Zeiler et al., 2013].", + "authors": "Karen Simonyan, Andrea Vedaldi, Andrew Zisserman", + "published": "2013-12-20", + "updated": "2014-04-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.13734v2", + "title": "The Internal State of an LLM Knows When It's Lying", + "abstract": "While Large Language Models (LLMs) have shown exceptional performance in\nvarious tasks, one of their most prominent drawbacks is generating inaccurate\nor false information with a confident tone. In this paper, we provide evidence\nthat the LLM's internal state can be used to reveal the truthfulness of\nstatements. This includes both statements provided to the LLM, and statements\nthat the LLM itself generates. Our approach is to train a classifier that\noutputs the probability that a statement is truthful, based on the hidden layer\nactivations of the LLM as it reads or generates the statement. Experiments\ndemonstrate that given a set of test sentences, of which half are true and half\nfalse, our trained classifier achieves an average of 71\\% to 83\\% accuracy\nlabeling which sentences are true versus false, depending on the LLM base\nmodel. Furthermore, we explore the relationship between our classifier's\nperformance and approaches based on the probability assigned to the sentence by\nthe LLM. We show that while LLM-assigned sentence probability is related to\nsentence truthfulness, this probability is also dependent on sentence length\nand the frequencies of words in the sentence, resulting in our trained\nclassifier providing a more reliable approach to detecting truthfulness,\nhighlighting its potential to enhance the reliability of LLM-generated content\nand its practical applicability in real-world scenarios.", + "authors": "Amos Azaria, Tom Mitchell", + "published": "2023-04-26", + "updated": "2023-10-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1806.02786v1", + "title": "Domain Adversarial Training for Accented Speech Recognition", + "abstract": "In this paper, we propose a domain adversarial training (DAT) algorithm to\nalleviate the accented speech recognition problem. 
In order to reduce the\nmismatch between labeled source domain data (\"standard\" accent) and unlabeled\ntarget domain data (with heavy accents), we augment the learning objective for\na Kaldi TDNN network with a domain adversarial training (DAT) objective to\nencourage the model to learn accent-invariant features. In experiments with\nthree Mandarin accents, we show that DAT yields up to 7.45% relative character\nerror rate reduction when we do not have transcriptions of the accented speech,\ncompared with the baseline trained on standard accent data only. We also find a\nbenefit from DAT when used in combination with training from automatic\ntranscriptions on the accented data. Furthermore, we find that DAT is superior\nto multi-task learning for accented speech recognition.", + "authors": "Sining Sun, Ching-Feng Yeh, Mei-Yuh Hwang, Mari Ostendorf, Lei Xie", + "published": "2018-06-07", + "updated": "2018-06-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1902.00751v2", + "title": "Parameter-Efficient Transfer Learning for NLP", + "abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in\nNLP. However, in the presence of many downstream tasks, fine-tuning is\nparameter inefficient: an entire new model is required for every task. As an\nalternative, we propose transfer with adapter modules. Adapter modules yield a\ncompact and extensible model; they add only a few trainable parameters per\ntask, and new tasks can be added without revisiting previous ones. The\nparameters of the original network remain fixed, yielding a high degree of\nparameter sharing. To demonstrate adapter's effectiveness, we transfer the\nrecently proposed BERT Transformer model to 26 diverse text classification\ntasks, including the GLUE benchmark. Adapters attain near state-of-the-art\nperformance, whilst adding only a few parameters per task. On GLUE, we attain\nwithin 0.4% of the performance of full fine-tuning, adding only 3.6% parameters\nper task. By contrast, fine-tuning trains 100% of the parameters per task.", + "authors": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly", + "published": "2019-02-02", + "updated": "2019-06-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.07460v1", + "title": "Diffusion Models for Adversarial Purification", + "abstract": "Adversarial purification refers to a class of defense methods that remove\nadversarial perturbations using a generative model. These methods do not make\nassumptions on the form of attack and the classification model, and thus can\ndefend pre-existing classifiers against unseen threats. However, their\nperformance currently falls behind adversarial training methods. In this work,\nwe propose DiffPure that uses diffusion models for adversarial purification:\nGiven an adversarial example, we first diffuse it with a small amount of noise\nfollowing a forward diffusion process, and then recover the clean image through\na reverse generative process. To evaluate our method against strong adaptive\nattacks in an efficient and scalable way, we propose to use the adjoint method\nto compute full gradients of the reverse generative process. 
Extensive\nexperiments on three image datasets including CIFAR-10, ImageNet and CelebA-HQ\nwith three classifier architectures including ResNet, WideResNet and ViT\ndemonstrate that our method achieves the state-of-the-art results,\noutperforming current adversarial training and adversarial purification\nmethods, often by a large margin. Project page: https://diffpure.github.io.", + "authors": "Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, Anima Anandkumar", + "published": "2022-05-16", + "updated": "2022-05-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1810.03292v3", + "title": "Sanity Checks for Saliency Maps", + "abstract": "Saliency methods have emerged as a popular tool to highlight features in an\ninput deemed relevant for the prediction of a learned model. Several saliency\nmethods have been proposed, often guided by visual appeal on image data. In\nthis work, we propose an actionable methodology to evaluate what kinds of\nexplanations a given method can and cannot provide. We find that reliance,\nsolely, on visual assessment can be misleading. Through extensive experiments\nwe show that some existing saliency methods are independent both of the model\nand of the data generating process. Consequently, methods that fail the\nproposed tests are inadequate for tasks that are sensitive to either data or\nmodel, such as, finding outliers in the data, explaining the relationship\nbetween inputs and outputs that the model learned, and debugging the model. We\ninterpret our findings through an analogy with edge detection in images, a\ntechnique that requires neither training data nor model. Theory in the case of\na linear model and a single-layer convolutional neural network supports our\nexperimental findings.", + "authors": "Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim", + "published": "2018-10-08", + "updated": "2020-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.00052v3", + "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", + "abstract": "The main goal behind state-of-the-art pre-trained multilingual models such as\nmultilingual BERT and XLM-R is enabling and bootstrapping NLP applications in\nlow-resource languages through zero-shot or few-shot cross-lingual transfer.\nHowever, due to limited model capacity, their transfer performance is the\nweakest exactly on such low-resource languages and languages unseen during\npre-training. We propose MAD-X, an adapter-based framework that enables high\nportability and parameter-efficient transfer to arbitrary tasks and languages\nby learning modular language and task representations. In addition, we\nintroduce a novel invertible adapter architecture and a strong baseline method\nfor adapting a pre-trained multilingual model to a new language. MAD-X\noutperforms the state of the art in cross-lingual transfer across a\nrepresentative set of typologically diverse languages on named entity\nrecognition and causal commonsense reasoning, and achieves competitive results\non question answering. 
Our code and adapters are available at AdapterHub.ml", + "authors": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, Sebastian Ruder", + "published": "2020-04-30", + "updated": "2020-10-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.10190v4", + "title": "Feature Purification: How Adversarial Training Performs Robust Deep Learning", + "abstract": "Despite the empirical success of using Adversarial Training to defend deep\nlearning models against adversarial perturbations, so far, it still remains\nrather unclear what the principles are behind the existence of adversarial\nperturbations, and what adversarial training does to the neural network to\nremove them.\n In this paper, we present a principle that we call Feature Purification,\nwhere we show one of the causes of the existence of adversarial examples is the\naccumulation of certain small dense mixtures in the hidden weights during the\ntraining process of a neural network; and more importantly, one of the goals of\nadversarial training is to remove such mixtures to purify hidden weights. We\npresent both experiments on the CIFAR-10 dataset to illustrate this principle,\nand a theoretical result proving that for certain natural classification tasks,\ntraining a two-layer neural network with ReLU activation using randomly\ninitialized gradient descent indeed satisfies this principle.\n Technically, we give, to the best of our knowledge, the first result proving\nthat the following two can hold simultaneously for training a neural network\nwith ReLU activation. (1) Training over the original data is indeed non-robust\nto small adversarial perturbations of some radius. (2) Adversarial training,\neven with an empirical perturbation algorithm such as FGM, can in fact be\nprovably robust against ANY perturbations of the same radius. Finally, we also\nprove a complexity lower bound, showing that low complexity models such as\nlinear classifiers, low-degree polynomials, or even the neural tangent kernel\nfor this network, CANNOT defend against perturbations of this same radius, no\nmatter what algorithms are used to train them.", + "authors": "Zeyuan Allen-Zhu, Yuanzhi Li", + "published": "2020-05-20", + "updated": "2022-06-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "math.OC", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1906.09453v2", + "title": "Image Synthesis with a Single (Robust) Classifier", + "abstract": "We show that the basic classification framework alone can be used to tackle\nsome of the most challenging tasks in image synthesis. In contrast to other\nstate-of-the-art approaches, the toolkit we develop is rather minimal: it uses\na single, off-the-shelf classifier for all these tasks. The crux of our\napproach is that we train this classifier to be adversarially robust. It turns\nout that adversarial robustness is precisely what we need to directly\nmanipulate salient features of the input. Overall, our findings demonstrate the\nutility of robustness in the broader machine learning context. 
Code and models\nfor our experiments can be found at https://git.io/robust-apps.", + "authors": "Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry", + "published": "2019-06-06", + "updated": "2019-08-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.NE", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1710.00935v4", + "title": "Interpretable Convolutional Neural Networks", + "abstract": "This paper proposes a method to modify traditional convolutional neural\nnetworks (CNNs) into interpretable CNNs, in order to clarify knowledge\nrepresentations in high conv-layers of CNNs. In an interpretable CNN, each\nfilter in a high conv-layer represents a certain object part. We do not need\nany annotations of object parts or textures to supervise the learning process.\nInstead, the interpretable CNN automatically assigns each filter in a high\nconv-layer with an object part during the learning process. Our method can be\napplied to different types of CNNs with different structures. The clear\nknowledge representation in an interpretable CNN can help people understand the\nlogics inside a CNN, i.e., based on which patterns the CNN makes the decision.\nExperiments showed that filters in an interpretable CNN were more semantically\nmeaningful than those in traditional CNNs.", + "authors": "Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu", + "published": "2017-10-02", + "updated": "2018-02-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1608.04644v2", + "title": "Towards Evaluating the Robustness of Neural Networks", + "abstract": "Neural networks provide state-of-the-art results for most machine learning\ntasks. Unfortunately, neural networks are vulnerable to adversarial examples:\ngiven an input $x$ and any target classification $t$, it is possible to find a\nnew input $x'$ that is similar to $x$ but classified as $t$. This makes it\ndifficult to apply neural networks in security-critical areas. Defensive\ndistillation is a recently proposed approach that can take an arbitrary neural\nnetwork, and increase its robustness, reducing the success rate of current\nattacks' ability to find adversarial examples from $95\\%$ to $0.5\\%$.\n In this paper, we demonstrate that defensive distillation does not\nsignificantly increase the robustness of neural networks by introducing three\nnew attack algorithms that are successful on both distilled and undistilled\nneural networks with $100\\%$ probability. Our attacks are tailored to three\ndistance metrics used previously in the literature, and when compared to\nprevious adversarial example generation algorithms, our attacks are often much\nmore effective (and never worse). Furthermore, we propose using high-confidence\nadversarial examples in a simple transferability test we show can also be used\nto break defensive distillation. 
We hope our attacks will be used as a\nbenchmark in future defense attempts to create neural networks that resist\nadversarial examples.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-08-16", + "updated": "2017-03-22", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.13337v1", + "title": "Robust Pre-Training by Adversarial Contrastive Learning", + "abstract": "Recent work has shown that, when integrated with adversarial training,\nself-supervised pre-training can lead to state-of-the-art robustness In this\nwork, we improve robustness-aware self-supervised pre-training by learning\nrepresentations that are consistent under both data augmentations and\nadversarial perturbations. Our approach leverages a recent contrastive learning\nframework, which learns representations by maximizing feature consistency under\ndifferently augmented views. This fits particularly well with the goal of\nadversarial robustness, as one cause of adversarial fragility is the lack of\nfeature invariance, i.e., small input perturbations can result in undesirable\nlarge changes in features or even predicted labels. We explore various options\nto formulate the contrastive task, and demonstrate that by injecting\nadversarial perturbations, contrastive pre-training can lead to models that are\nboth label-efficient and robust. We empirically evaluate the proposed\nAdversarial Contrastive Learning (ACL) and show it can consistently outperform\nexisting methods. For example on the CIFAR-10 dataset, ACL outperforms the\nprevious state-of-the-art unsupervised robust pre-training approach by 2.99% on\nrobust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL\npre-training can improve semi-supervised adversarial training, even when only a\nfew labeled examples are available. Our codes and pre-trained models have been\nreleased at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.", + "authors": "Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang", + "published": "2020-10-26", + "updated": "2020-10-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.19101v2", + "title": "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness", + "abstract": "One of the remarkable properties of robust computer vision models is that\ntheir input-gradients are often aligned with human perception, referred to in\nthe literature as perceptually-aligned gradients (PAGs). Despite only being\ntrained for classification, PAGs cause robust models to have rudimentary\ngenerative capabilities, including image generation, denoising, and\nin-painting. However, the underlying mechanisms behind these phenomena remain\nunknown. In this work, we provide a first explanation of PAGs via\n\\emph{off-manifold robustness}, which states that models must be more robust\noff- the data manifold than they are on-manifold. We first demonstrate\ntheoretically that off-manifold robustness leads input gradients to lie\napproximately on the data manifold, explaining their perceptual alignment. We\nthen show that Bayes optimal models satisfy off-manifold robustness, and\nconfirm the same empirically for robust models trained via gradient norm\nregularization, randomized smoothing, and adversarial training with projected\ngradient descent. 
Quantifying the perceptual alignment of model gradients via\ntheir similarity with the gradients of generative models, we show that\noff-manifold robustness correlates well with perceptual alignment. Finally,\nbased on the levels of on- and off-manifold robustness, we identify three\ndifferent regimes of robustness that affect both perceptual alignment and model\naccuracy: weak robustness, bayes-aligned robustness, and excessive robustness.\nCode is available at \\url{https://github.com/tml-tuebingen/pags}.", + "authors": "Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju", + "published": "2023-05-30", + "updated": "2024-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.15980v2", + "title": "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts", + "abstract": "The remarkable success of pretrained language models has motivated the study\nof what kinds of knowledge these models learn during pretraining. Reformulating\ntasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach\nfor gauging such knowledge, however, its usage is limited by the manual effort\nand guesswork required to write suitable prompts. To address this, we develop\nAutoPrompt, an automated method to create prompts for a diverse set of tasks,\nbased on a gradient-guided search. Using AutoPrompt, we show that masked\nlanguage models (MLMs) have an inherent capability to perform sentiment\nanalysis and natural language inference without additional parameters or\nfinetuning, sometimes achieving performance on par with recent state-of-the-art\nsupervised models. We also show that our prompts elicit more accurate factual\nknowledge from MLMs than the manually created prompts on the LAMA benchmark,\nand that MLMs can be used as relation extractors more effectively than\nsupervised relation extraction models. These results demonstrate that\nautomatically generated prompts are a viable parameter-free alternative to\nexisting probing methods, and as pretrained LMs become more sophisticated and\ncapable, potentially a replacement for finetuning.", + "authors": "Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh", + "published": "2020-10-29", + "updated": "2020-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.01092v2", + "title": "ArCL: Enhancing Contrastive Learning with Augmentation-Robust Representations", + "abstract": "Self-Supervised Learning (SSL) is a paradigm that leverages unlabeled data\nfor model training. Empirical studies show that SSL can achieve promising\nperformance in distribution shift scenarios, where the downstream and training\ndistributions differ. However, the theoretical understanding of its\ntransferability remains limited. In this paper, we develop a theoretical\nframework to analyze the transferability of self-supervised contrastive\nlearning, by investigating the impact of data augmentation on it. Our results\nreveal that the downstream performance of contrastive learning depends largely\non the choice of data augmentation. 
Moreover, we show that contrastive learning\nfails to learn domain-invariant features, which limits its transferability.\nBased on these theoretical insights, we propose a novel method called\nAugmentation-robust Contrastive Learning (ArCL), which guarantees to learn\ndomain-invariant features and can be easily integrated with existing\ncontrastive learning algorithms. We conduct experiments on several datasets and\nshow that ArCL significantly improves the transferability of contrastive\nlearning.", + "authors": "Xuyang Zhao, Tianqi Du, Yisen Wang, Jun Yao, Weiran Huang", + "published": "2023-03-02", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.14460v1", + "title": "CFA: Class-wise Calibrated Fair Adversarial Training", + "abstract": "Adversarial training has been widely acknowledged as the most effective\nmethod to improve the adversarial robustness against adversarial examples for\nDeep Neural Networks (DNNs). So far, most existing works focus on enhancing the\noverall model robustness, treating each class equally in both the training and\ntesting phases. Although revealing the disparity in robustness among classes,\nfew works try to make adversarial training fair at the class level without\nsacrificing overall robustness. In this paper, we are the first to\ntheoretically and empirically investigate the preference of different classes\nfor adversarial configurations, including perturbation margin, regularization,\nand weight averaging. Motivated by this, we further propose a\n\\textbf{C}lass-wise calibrated \\textbf{F}air \\textbf{A}dversarial training\nframework, named CFA, which customizes specific training configurations for\neach class automatically. Experiments on benchmark datasets demonstrate that\nour proposed CFA can improve both overall robustness and fairness notably over\nother state-of-the-art methods. Code is available at\n\\url{https://github.com/PKU-ML/CFA}.", + "authors": "Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang", + "published": "2023-03-25", + "updated": "2023-03-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.01761v1", + "title": "Rethinking Interpretability in the Era of Large Language Models", + "abstract": "Interpretable machine learning has exploded as an area of interest over the\nlast decade, sparked by the rise of increasingly large datasets and deep neural\nnetworks. Simultaneously, large language models (LLMs) have demonstrated\nremarkable capabilities across a wide array of tasks, offering a chance to\nrethink opportunities in interpretable machine learning. Notably, the\ncapability to explain in natural language allows LLMs to expand the scale and\ncomplexity of patterns that can be given to a human. However, these new\ncapabilities raise new challenges, such as hallucinated explanations and\nimmense computational costs.\n In this position paper, we start by reviewing existing methods to evaluate\nthe emerging field of LLM interpretation (both interpreting LLMs and using LLMs\nfor explanation). We contend that, despite their limitations, LLMs hold the\nopportunity to redefine interpretability with a more ambitious scope across\nmany applications, including in auditing LLMs themselves. 
We highlight two\nemerging research priorities for LLM interpretation: using LLMs to directly\nanalyze new datasets and to generate interactive explanations.", + "authors": "Chandan Singh, Jeevana Priya Inala, Michel Galley, Rich Caruana, Jianfeng Gao", + "published": "2024-01-30", + "updated": "2024-01-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1705.07263v2", + "title": "Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods", + "abstract": "Neural networks are known to be vulnerable to adversarial examples: inputs\nthat are close to natural inputs but classified incorrectly. In order to better\nunderstand the space of adversarial examples, we survey ten recent proposals\nthat are designed for detection and compare their efficacy. We show that all\ncan be defeated by constructing new loss functions. We conclude that\nadversarial examples are significantly harder to detect than previously\nappreciated, and the properties believed to be intrinsic to adversarial\nexamples are in fact not. Finally, we propose several simple guidelines for\nevaluating future proposed defenses.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2017-05-20", + "updated": "2017-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.06083v4", + "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", + "abstract": "Recent work has demonstrated that deep neural networks are vulnerable to\nadversarial examples---inputs that are almost indistinguishable from natural\ndata and yet classified incorrectly by the network. In fact, some of the latest\nfindings suggest that the existence of adversarial attacks may be an inherent\nweakness of deep learning models. To address this problem, we study the\nadversarial robustness of neural networks through the lens of robust\noptimization. This approach provides us with a broad and unifying view on much\nof the prior work on this topic. Its principled nature also enables us to\nidentify methods for both training and attacking neural networks that are\nreliable and, in a certain sense, universal. In particular, they specify a\nconcrete security guarantee that would protect against any adversary. These\nmethods let us train networks with significantly improved resistance to a wide\nrange of adversarial attacks. They also suggest the notion of security against\na first-order adversary as a natural and broad security guarantee. We believe\nthat robustness against such well-defined classes of adversaries is an\nimportant stepping stone towards fully resistant deep learning models. Code and\npre-trained models are available at https://github.com/MadryLab/mnist_challenge\nand https://github.com/MadryLab/cifar10_challenge.", + "authors": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu", + "published": "2017-06-19", + "updated": "2019-09-04", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.NE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1312.6199v4", + "title": "Intriguing properties of neural networks", + "abstract": "Deep neural networks are highly expressive models that have recently achieved\nstate of the art performance on speech and visual recognition tasks. 
While\ntheir expressiveness is the reason they succeed, it also causes them to learn\nuninterpretable solutions that could have counter-intuitive properties. In this\npaper we report two such properties.\n First, we find that there is no distinction between individual high level\nunits and random linear combinations of high level units, according to various\nmethods of unit analysis. It suggests that it is the space, rather than the\nindividual units, that contains of the semantic information in the high layers\nof neural networks.\n Second, we find that deep neural networks learn input-output mappings that\nare fairly discontinuous to a significant extend. We can cause the network to\nmisclassify an image by applying a certain imperceptible perturbation, which is\nfound by maximizing the network's prediction error. In addition, the specific\nnature of these perturbations is not a random artifact of learning: the same\nperturbation can cause a different network, that was trained on a different\nsubset of the dataset, to misclassify the same input.", + "authors": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus", + "published": "2013-12-21", + "updated": "2014-02-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.NE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.09685v2", + "title": "LoRA: Low-Rank Adaptation of Large Language Models", + "abstract": "An important paradigm of natural language processing consists of large-scale\npre-training on general domain data and adaptation to particular tasks or\ndomains. As we pre-train larger models, full fine-tuning, which retrains all\nmodel parameters, becomes less feasible. Using GPT-3 175B as an example --\ndeploying independent instances of fine-tuned models, each with 175B\nparameters, is prohibitively expensive. We propose Low-Rank Adaptation, or\nLoRA, which freezes the pre-trained model weights and injects trainable rank\ndecomposition matrices into each layer of the Transformer architecture, greatly\nreducing the number of trainable parameters for downstream tasks. Compared to\nGPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable\nparameters by 10,000 times and the GPU memory requirement by 3 times. LoRA\nperforms on-par or better than fine-tuning in model quality on RoBERTa,\nDeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher\ntraining throughput, and, unlike adapters, no additional inference latency. We\nalso provide an empirical investigation into rank-deficiency in language model\nadaptation, which sheds light on the efficacy of LoRA. We release a package\nthat facilitates the integration of LoRA with PyTorch models and provide our\nimplementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at\nhttps://github.com/microsoft/LoRA.", + "authors": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen", + "published": "2021-06-17", + "updated": "2021-10-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1802.00420v4", + "title": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples", + "abstract": "We identify obfuscated gradients, a kind of gradient masking, as a phenomenon\nthat leads to a false sense of security in defenses against adversarial\nexamples. 
While defenses that cause obfuscated gradients appear to defeat\niterative optimization-based attacks, we find defenses relying on this effect\ncan be circumvented. We describe characteristic behaviors of defenses\nexhibiting the effect, and for each of the three types of obfuscated gradients\nwe discover, we develop attack techniques to overcome it. In a case study,\nexamining non-certified white-box-secure defenses at ICLR 2018, we find\nobfuscated gradients are a common occurrence, with 7 of 9 defenses relying on\nobfuscated gradients. Our new attacks successfully circumvent 6 completely, and\n1 partially, in the original threat model each paper considers.", + "authors": "Anish Athalye, Nicholas Carlini, David Wagner", + "published": "2018-02-01", + "updated": "2018-07-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. 
To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. 
We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. 
We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. 
dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. 
Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. 
Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. 
Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. 
Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. 
Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. 
(2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. 
To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. 
Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. 
Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. 
ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. 
As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. 
Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. 
Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. 
However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. 
However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. 
This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. 
The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. 
As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. 
Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. 
A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. 
Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). 
This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. 
We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. 
This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. 
The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. 
A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.13033v1", + "title": "Sample Design Engineering: An Empirical Study of What Makes Good Downstream Fine-Tuning Samples for LLMs", + "abstract": "In the burgeoning field of Large Language Models (LLMs) like ChatGPT and\nLLaMA, Prompt Engineering (PE) is renowned for boosting zero-shot or in-context\nlearning (ICL) through prompt modifications. Yet, the realm of the sample\ndesign for downstream fine-tuning, crucial for task-specific LLM adaptation, is\nlargely unexplored. This paper introduces Sample Design Engineering (SDE), a\nmethodical approach to enhancing LLMs' post-tuning performance by refining\ninput, output, and reasoning designs. We conduct a series of in-domain (ID) and\nout-of-domain (OOD) experiments to assess the impact of various design options\non LLMs' downstream performance, revealing several intriguing patterns that\nhold consistently across different LLMs. Based on these insights, we propose an\nintegrated SDE strategy, combining the most effective options, and validate its\nconsistent superiority over heuristic sample designs in complex downstream\ntasks like multi-aspect sentiment analysis, event extraction, and nested entity\nrecognition. Additionally, analyses of LLMs' inherent prompt/output perplexity,\nzero-shot, and ICL abilities illustrate that good PE strategies may not always\ntranslate to good SDE strategies. Code available at\nhttps://github.com/beyondguo/LLM-Tuning.", + "authors": "Biyang Guo, He Wang, Wenyilin Xiao, Hong Chen, Zhuxin Lee, Songqiao Han, Hailiang Huang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 Prompt Engineering (PE) The effectiveness of PE methods is largely built upon the strong inherent capabilities of LLMs, with most research focusing on very large models such as GPT-3, GPT-4, PaLM, etc. (refer to Sahoo et al. (2024)). These models are pre-trained on extremely vast corpora, acquiring a wealth of knowledge and patterns, which enables them to directly perform complex tasks through careful prompt design. For instance, Brown et al. (2020) use carefully crafted prompts and in-context-learning (ICL) techniques to guide GPT-3 on novel tasks without training; Wei et al. (2022) propose the Chain-ofThought (CoT) technique that can boost the logic reasoning performance; RAG (Lewis et al., 2020) and CoVe (Dhuliawala et al., 2023) methods are used to reduce hallucination during generation; Li et al. (2023) introduce EmotionPrompt to improve LLMs\u2019 emotional intelligence. 
However, these most advanced and effective LLMs are either black-box models that are only accessible via APIs, or extremely large models that are unaffordable for most companies to serve in production. Consequently, many practitioners turn to smaller but open-source LLMs, especially models of around 10B parameters. In this situation, solely relying on PE for zero-shot or ICL inference cannot handle many real-world complex NLP tasks. 2.2 Fine-tuning LLMs According to their different purposes, we can divide LLMs' fine-tuning into two types: instruction-tuning (IT) and downstream-tuning (DT)1. IT trains LLMs to comprehend and execute instructions across a range of NLP tasks, enabling predictions for new tasks (Wei et al., 2021; Mishra et al., 2022) with datasets like FLAN (Longpre et al., 2023), Self-instruct (Wang et al., 2023b), Alpaca (Taori et al., 2023) and HC3 (Guo et al., 2023), covering tasks such as classification, QA and translation. This is mainly applied to base models to enable them to follow general human instructions. DT focuses on customizing LLMs for specific, often complex, tasks in industrial applications, demanding high output stability for easier parsing and application in downstream products. An example is multi-aspect sentiment analysis, which requires detailed task instructions and outputs. Our study centers on SDE in DT scenarios, highlighting sample design challenges, but the insights may also benefit IT sample design, a topic for future exploration. 1 It is also known as task tuning (TT) in some literature, e.g., Weber et al. (2023). Figure 2: Typical SDE options to be considered when designing downstream-tuning samples, taking the MASA task as an example; the figure's three panels cover Input Design Options (instruction placement, input modeling), Output Design Options (format of multiple predictions, handling of unmentioned targets, textual or numerical labels), and optional Reasoning Design Options (No-CoT, CoT, R-CoT). Ai means aspect i, Si means its sentiment label, [P] refers to placeholder tokens. Review Text: This restaurant is on the second floor and is a bit out of the way. If driving, you can only park in the underground parking of the mall opposite (6). The
most\u00a0 popular item ordered is the\u00a0black tiger shrimp, which tastes good\u00a0(1).\u00a0Overall,\u00a0 the prices are cheap\u00a0(3), probably because the XX Plaza is not very\u00a0popular.\u00a0The\u00a0 only downside is that\u00a0the beverages are instant fruit\u00a0juices, which\u00a0don't\u00a0taste\u00a0 very good\u00a0(2).\u00a0The waitstaff's attitude was nice\u00a0(5),\u00a0they\u00a0showed us how\u00a0to use the\u00a0 coupon to save money. Positive beverage (2) price (3) food (1) hygiene (4) staff (5) parking (6) Negative Positive Positive Negative Desired Prediction: Unmentioned Figure 3: An example for the MASA task. 2.3 Parameter-efficient fine-tuning The expansion of language models has made traditional full-parameter fine-tuning (FFT) less viable due to its high computational and storage demands. Parameter-efficient fine-tuning (PEFT) methods, such as prefix-tuning(Li and Liang, 2021), prompt-tuning(Lester et al., 2021), p-tuning(Liu et al., 2023), and LoRA(Hu et al., 2021) provide cost-effective alternatives that retain FFT\u2019s effectiveness, gaining popularity in industrial applications. These techniques are adaptable to both IT and DT scenarios. In this research, we use the widely-used LoRA as the default fine-tuning technique. However, we believe results from our study are also applicable to other PEFT methods.", + "pre_questions": [], + "main_content": "Introduction The emergence of Large Language Models (LLMs) such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2023), LLaMA (Touvron et al., 2023a) and GPT-4 (Achiam et al., 2023) revolutionized natural language processing (NLP), enabling complex tasks to be tackled with a single model. This shift has profoundly broadened the range of tasks manageable by NLP models, while simultaneously \u2020Equal Contribution \u2217Corresponding authors, emails: han.songqiao@shufe.edu.cn, hlhuang@shufe.edu.cn LLMs LLMs Prompt Engineering Input Output Input Output Frozen, very large LLMs Trainable, smaller open-source LLMs Sample Design Engineering LLMs For zero-shot/ICL For downstream-tuning Figure 1: A simplified comparison between PE and our proposed SDE. consolidating the methodologies for various tasks under the unified framework of text generation. In this background, Prompt Engineering (PE) has emerged as a key area in leveraging cutting-edge LLMs, leading to advances in applying LLMs to new tasks (Brown et al., 2020), enhancing logical reasoning (Wei et al., 2022), and increasing taskspecific accuracy (Wang et al., 2023a; Wei et al., 2023), without updating model weights. While numerous PE techniques have been developed for LLMs\u2019 zero-shot and in-context learning (ICL), the challenge of designing effective training samples for fine-tuning LLMs\u2014termed Sample Design Engineering (SDE) in this paper\u2014remains underexplored. SDE is crucial for tailoring smaller open-source LLMs to specific requirements, especially given the complexity of training samples for downstream tasks. Figure 1 is a simplified demonstration of PE and SDE. To address this gap, this paper undertakes a detailed and comprehensive exploration of SDE for LLMs\u2019 downstream fine-tuning. Our study is based on the hypothesis that the structure or elements of arXiv:2404.13033v1 [cs.CL] 19 Apr 2024 training samples may have a big impact on the finetuned LLMs. Different sample designs may make it easier or harder for the LLMs to learn, especially in scenarios where data is scarce. 
We begin by identifying a range of typical SDE options and categorizing them into three groups: input, output, and reasoning design options (shown in Figure 2). To reveal the impact of each SDE option, we conduct experiments on a typical downstream scenario, multi-aspect sentiment analysis (MASA), with 2 in-domain (ID) tasks and 2 out-of-domain (OOD) tasks. Different from instruction-tuning datasets like FLAN (Longpre et al., 2023), the MASA task involves more complicated input and output elements, making it suitable for an in-depth investigation of different sample designs. Comprehensive experiments on these 4 tasks with 6 popular open-source LLMs are undertaken to reveal how different SDE options affect downstream performance. Some interesting and thought-provoking conclusions are revealed through our experiments. For example, simply switching the position of the task instruction can make a difference, and adding placeholders for unmentioned targets brings a notable performance gain. Leveraging these findings, we combine the empirically well-performing SDE options and propose an integrated SDE strategy, ES-SDE. Extensive experiments on 3 complex downstream tasks (Nested-NER, Event Detection, and MASA) with 2 additional LLMs demonstrate that ES-SDE notably surpasses a weaker SDE combination, as well as heuristic designs from other studies. ES-SDE's robustness to different training sizes, decoding randomness, and instruction variation further underscores its stable effectiveness. In an exploratory analysis, we investigate the link between effective prompt and sample designs via perplexity, zero-shot, and ICL analysis. Our findings suggest that a well-crafted PE strategy may not necessarily translate to a successful SDE strategy. This observation encourages further research into SDE's mechanisms, which is promising for enhancing LLMs' downstream applications. 3.1 Typical SDE Options We categorize sample design options into three aspects: input, output, and reasoning. We take Multi-Aspect Sentiment Analysis (MASA), a typical downstream task, as an example to clarify each design option for fine-tuning samples. As illustrated in Figure 3, MASA requires analyzing review texts to assign sentiments to predefined aspects, while some aspects may be unmentioned. Figure 2 is an overview of the different SDE options, which should be considered to design proper DT samples. 3.1.1 Input Design Options a. Instruction Placement: We explore the effect of instruction positioning relative to the task text (for MASA, the review text), examining Inst-first (instruction before the task text) and Inst-last (instruction after the task text). We also compare with the No-inst (no instruction) option to evaluate the effectiveness of explicit instructions, as used in many previous conditional text generation tasks (Lewis et al., 2019; Guo et al., 2022; Zhang et al., 2023). b. Input Modeling: Considering the distinction between unified sequence modeling in LLM pretraining and the explicit input/output segmentation in fine-tuning, we compare No-MI (excluding the input from loss calculation, akin to LLaMA2's SFT process (Touvron et al., 2023b)) against MI (modeling the input in backpropagation). 3.1.2 Output Design Options a. Multiple Predictions Formatting: For tasks necessitating several predictions, we evaluate output formatting from less to more structured: Natural (free-form text), Lines (each aspect on a new line), and JSON (JSON-lines for precision and explicitness). b.
Handling of Unmentioned Targets: We consider whether to omit the unmentioned (OU) targets in the output, or to place placeholders (PU) for those targets. The placeholder tokens can be strings like "Unmentioned", "None", or "[]", depending on the task. c. Textual or Numerical Labels: By default, we use the TxtLabel option for textual output labels. However, in some cases, using numbers to represent outcomes (NumLabel) may enhance prediction robustness. 3.1.3 Reasoning Design Options Many tasks require reasoning, where Chain-of-Thought (CoT) (Wei et al., 2022) has shown promise in improving LLMs' reasoning in zero-shot and ICL, as well as IT scenarios (Kim et al., 2023). Yet, its impact on DT remains less studied. We introduce the CoT option for training models to "think before they predict". We use JSON as the default output format to make the representation clearer and add a new description field before the sentiment field. Conversely, R-CoT (Reverse-CoT) reverses these fields, enabling a "predict then explain" approach to explore CoT's mechanics further. Note that implementing CoT-like samples incurs additional annotation costs due to the description fields, making the reasoning design options task-dependent. 3.2 Integrated SDE Strategy A final sample design is a combination of the above design options, which we call an integrated SDE strategy. This paper first explores the impact of each individual option through extensive experimentation, leading to the proposal of an evidence-based integrated SDE strategy. 4 Experiments I: Evaluating The Impact of Each SDE Option 4.1 Settings Tasks and Datasets. We experiment with in-domain (ID) and out-of-domain (OOD) evaluations for the Chinese online review MASA scenario. The data is provided and annotated by our collaborating company, which encounters a real-world business need for the analysis of extensive customer online reviews. The data annotations come from two domains of aspects: D1, about food, beverage, price, hygiene, staff attitude, and parking convenience; and D2, about traffic convenience, queuing, serving speed, decoration, and noise. The model needs to give a sentiment label from {positive, neutral, negative} for each aspect, while some aspects may not occur in the review. Based on the two domains, we construct the following 4 tasks: • D1⇒D1 and D2⇒D2 are two ID evaluation tasks, where train and test sets come from the same domains; • D1⇒D2 and D2⇒D1 are two OOD generalization tasks, where the model trains on one domain but is tested on an unseen domain. Considering the high cost of annotation in industry and the fact that fine-tuning LLMs requires less annotated data (Zhou et al., 2024), we train the model with 500 and 1,000 samples, respectively. We use a large test set containing around 8,000 samples to make the results more stable and convincing. Dataset details are given in Appendix A.2.
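To make the option space concrete, the sketch below shows how a single MASA fine-tuning sample could be rendered under different input and output designs (instruction placement, Lines/JSON/Natural formatting, and PU vs. OU handling of unmentioned aspects). It is a minimal illustration under stated assumptions: the instruction wording, English aspect names, and placeholder string are illustrative, not the paper's released templates (those are in the linked repository).

```python
# Illustrative sketch only: instruction text, aspect names, and the placeholder
# string are assumptions for demonstration, not the paper's released prompts.
import json

D1_ASPECTS = ["food", "beverage", "price", "hygiene", "staff attitude", "parking convenience"]
INSTRUCTION = (
    "Analyze the review and give a sentiment (positive/neutral/negative) "
    "for each aspect: " + ", ".join(D1_ASPECTS) + "."
)

def build_input(review: str, inst_first: bool = True, no_inst: bool = False) -> str:
    """Input design: instruction placement (Inst-first / Inst-last / No-inst)."""
    if no_inst:
        return review
    return f"{INSTRUCTION}\n{review}" if inst_first else f"{review}\n{INSTRUCTION}"

def build_output(labels: dict, fmt: str = "lines", keep_unmentioned: bool = True) -> str:
    """Output design: Natural / Lines / JSON formatting, and PU vs. OU handling."""
    items = []
    for aspect in D1_ASPECTS:
        label = labels.get(aspect)           # None means the aspect is unmentioned
        if label is None:
            if not keep_unmentioned:         # OU: drop unmentioned aspects
                continue
            label = "unmentioned"            # PU: keep a placeholder token
        items.append((aspect, label))
    if fmt == "json":
        return "\n".join(json.dumps({"aspect": a, "sentiment": s}) for a, s in items)
    if fmt == "lines":
        return "\n".join(f"{a}: {s}" for a, s in items)
    return "; ".join(f"the sentiment of {a} is {s}" for a, s in items)  # Natural free text

# One training sample under an Inst-first + Lines + PU + TxtLabel design:
sample = {
    "input": build_input("The shrimp was tasty but parking was a hassle."),
    "output": build_output({"food": "positive", "parking convenience": "negative"}),
}
```

Swapping the keyword arguments reproduces the other design combinations compared in Experiments I.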
Models. We utilize the following widely used open-source LLMs of 7B size, in both base and chat versions: 1) chinese-llama-2-7b (noted as c-llama2-base) and its instruction-tuned version chinese-alpaca-2-7b (c-llama2-chat) from the Chinese-LLaMA2 series (Cui et al., 2023), which is the vocabulary-expanded version of LLaMA2 (Touvron et al., 2023b) with secondary pre-training and fine-tuning on Chinese corpora; 2) internlm-7b-base (intern-base) and internlm-7b-chat (intern-chat) from the InternLM series (Team, 2023), which are pretrained on trillions of high-quality tokens and perform well in Chinese and English tasks; 3) baichuan2-7b-base (bc2-base) and baichuan2-7b-chat (bc2-chat) from the Baichuan2 series (Yang et al., 2023), one of the SOTA LLM families at the time of release. We use LoRA as the default efficient fine-tuning technique. Hyperparameters and other training details can be found in Appendix A.2. Evaluation Metrics. We evaluate MASA performance from two perspectives: 1) Sentiment analysis performance. We use the weighted Kappa score κ (Cohen, 1968) for this measurement, considering the imbalance of different aspects and the ordinal nature of sentiment labels. Figure 4: Sentiment analysis performances (κ) of different SDE options (input, output, and reasoning design options, for both Chat-LLMs and Base-LLMs on ID and OOD tasks). Results for ID are the average of D1->D1 and D2->D2, and likewise for OOD. The bars depict each method's relative improvement or degradation compared to the baseline, with each method differing from the baseline in only one option. Detailed results for each task are in Tables 3-8. The weighted Kappa score allows for setting weights to enable a nuanced assessment of different classification error degrees (Yilmaz and Demirhan, 2023).
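The paper's exact Kappa weight matrix is specified in its Appendix A.1; purely as an illustration of how an ordinal penalty enters the weighted Kappa computation, a minimal sketch with simple linear weights (an assumption, not the paper's matrix) follows.

```python
import numpy as np

def weighted_kappa(y_true, y_pred, labels=("negative", "neutral", "positive")):
    """Weighted Cohen's kappa with a linear ordinal penalty.

    Illustrative only: the paper defines its own weight matrix (its Appendix A.1),
    e.g. penalizing positive<->negative confusions more than positive<->neutral.
    """
    k = len(labels)
    idx = {lab: i for i, lab in enumerate(labels)}
    # Observed agreement matrix, normalized to a joint distribution.
    observed = np.zeros((k, k))
    for t, p in zip(y_true, y_pred):
        observed[idx[t], idx[p]] += 1
    observed /= observed.sum()
    # Expected matrix under chance agreement (outer product of the marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Linear disagreement weights: 0 on the diagonal, larger for distant labels.
    weights = np.abs(np.arange(k)[:, None] - np.arange(k)[None, :]) / (k - 1)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

print(weighted_kappa(
    ["positive", "negative", "neutral", "positive"],
    ["positive", "neutral",  "neutral", "negative"],
))
```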
For example, classifying "positive" as "negative" is more detrimental than classifying "positive" as "neutral", hence a higher penalty should be imposed on the former. 2) Format adherence, to assess the generation stability of LLMs. It is vital for LLMs to have good format adherence on downstream tasks so that the output can be parsed successfully. We report the format-parsing error rate for this metric. Note that when calculating κ, we use relaxed parsing rules to allow some minor uncertainty in aspect/label expressions. If a certain aspect still cannot be parsed correctly, this aspect is treated as "unmentioned". The definition of κ, the Kappa weight matrix, and the format-parsing rules can be found in Appendix A.1. 4.2 Experimental Results on Each Option We report and analyze the results from two perspectives: sentiment analysis performance and format adherence ability. 4.2.1 Sentiment Analysis Performance We first assess the sentiment analysis performance of LLMs using different sample design options. The comparative results on ID and OOD tasks for 3 Chat-LLMs and 3 Base-LLMs are plotted in Figure 4 (full results are in Tables 3 to 8 in Appendix A.4). Some shared and intriguing patterns are revealed by the results. Conclusions for Input Options: 1) Instructions enhance DT performance: The No-Inst option leads to poorer performance in ID tasks and a lack of OOD generalization ability compared to the Inst-first or Inst-last methods that incorporate instructions. This underlines the critical role of including instructions for improving both the understanding and the generalizability of LLMs. 2) Better to place the instruction first: The Inst-first method outperforms Inst-last across both ID and OOD tasks for different LLMs. This demonstrates the significance of instruction placement for LLMs' tuning process. We hypothesize that this may partly be explained by the attention mechanism; see Appendix A.6. 3) Modeling input detracts from performance: Employing the MI approach results in worse outcomes compared to the No-MI baselines across various models and tasks. This indicates that modeling the input part during fine-tuning may hinder the LLM's effectiveness, suggesting a cautious approach to what aspects of the task are modeled. Conclusions for Output Options: 1) Lines is a reliable output format for multiple predictions: The Lines format, positioned between the Natural and JSON formats, demonstrates stable and high performance in sentiment analysis across various models and tasks. Its effectiveness lies in offering structured information while retaining natural language readability, making it versatile for different LLMs. Figure 5: Format adherence performance, measured by parsing error rates (%), for the input, output, and reasoning options across the intern, bc2, and c-llama2 chat models on ID and OOD tasks; '*' means the same option as above. 2) Base-LLMs exhibit similar patterns while Chat-LLMs diverge: Base models respond similarly to output formats, indicating consistency in their responses.
In contrast, Chat models, such as bc2-chat and c-llama2-chat, exhibit varied performances, suggesting differences in the structure of their SFT or RLHF data. For instance, bc2-chat and c-llama2-chat perform well with the JSON format, unlike intern-chat, implying a variance in the amount of structured data used in training. 3) Base-LLMs favor more natural formats while Chat-LLMs can fit or bear more sophisticated formats: Base models prefer Natural and Lines over JSON. Conversely, Chat models lean towards structured formats, with Lines and JSON. This divergence hints at the different training backgrounds, with Chat models being more accommodating to sophisticated data formats. One more piece of evidence is that the NumLabel option, which is less natural than TxtLabel, harms the Base models much more than the Chat models. 4) Textual over numeric labels: Switching from textual to numeric labels worsens performance, likely because numeric labels lack the descriptive depth and context clues that textual labels provide, which are crucial for LLMs trained on natural language text. 5) Omitting the unmentioned targets may not be a good choice: While the OU option, which excludes unmentioned aspects, might seem to simplify outputs, it also introduces format inconsistency. This lack of uniformity forces the model to adapt to varied aspect mentions per sample, increasing task complexity through dynamic adjustment of the output format. Instead, the PU option keeps a consistent output format by adding placeholders, perhaps making the format easier for LLMs to learn. Additional analysis shows that aspects that are more frequently unmentioned suffer greater underperformance with OU compared to PU; see Appendix A.7. Conclusions for Reasoning Options: 1) Subtle impact of CoT on ID, but significant on OOD tasks: The CoT design marginally affects ID tasks but markedly improves OOD performance. This contrast highlights CoT's role in enhancing model reasoning and adaptability in unfamiliar contexts, underpinning its value for generalization. 2) "Think before predict" beats "predict then explain": When the reasoning step is placed after predicting, as in the R-CoT method, the performance does not match that of the standard CoT approach. However, R-CoT can still outperform No-CoT in many cases, suggesting that a single reasoning component is also beneficial. 4.2.2 Format Adherence Performance Figure 5 presents the format adherence results for the Chat-LLMs, from which we find that: 1) While the Inst-first method improves sentiment analysis, it shows less stability in format adherence, especially in OOD scenarios, indicating that leading with instructions might increase format errors on unfamiliar content; 2) Structured design options lead to better format adherence: a noticeable trend is that structured outputs, especially in the order JSON > Lines > Natural, have lower format error rates. The JSON format, in particular, demonstrates strong adherence to the correct structure, highlighting a balance between output complexity and precision; 3) MI, NumLabel and CoT options can be quite unstable for certain LLMs, while other options are generally consistent across different models. In applications where stability is vital, these unstable options should be taken seriously; 4) Though it improves understanding and reasoning performance, the CoT design puts LLMs at a higher risk of parsing failure for customized downstream tasks, underlining a tradeoff for this option.
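To make the format adherence metric concrete, the sketch below shows a relaxed parser for Lines-format outputs and the resulting format-parsing error rate. The matching rules here are simplified assumptions, not the paper's exact relaxed-parsing rules from its Appendix A.1.

```python
import re

VALID = {"positive", "neutral", "negative", "unmentioned"}

def parse_lines_output(text, aspects):
    """Relaxed parsing of a Lines-format prediction: one 'aspect: label' per line.

    Any aspect that cannot be recovered is treated as 'unmentioned', mirroring
    the relaxed-evaluation idea; returning None signals a format failure.
    """
    parsed = {}
    for line in text.splitlines():
        match = re.match(r"\s*(.+?)\s*[:\-]\s*(\w+)\s*$", line)
        if not match:
            continue
        aspect, label = match.group(1).strip().lower(), match.group(2).strip().lower()
        if aspect in aspects and label in VALID:
            parsed[aspect] = label
    if not parsed:
        return None  # nothing usable: count as a format-parsing error
    return {a: parsed.get(a, "unmentioned") for a in aspects}

def format_error_rate(outputs, aspects):
    failures = sum(parse_lines_output(o, aspects) is None for o in outputs)
    return failures / max(len(outputs), 1)

aspects = {"food", "price", "parking convenience"}
preds = ["food: positive\nprice: neutral", "sorry, I cannot help with that"]
print(format_error_rate(preds, aspects))  # 0.5 in this toy example
```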
Considering LLMs\u2019 format adherence alongside their understanding abilities is crucial for specialized downstream applications, suggesting the need for a balanced approach in industrial scenarios. Figure 6: Comparison of different sample design strategies. (a) Performance of different sample design strategies with increasing training sizes. (b) Robustness to decoding sampling randomness, training size = 500. (c) Robustness to instruction content variation, training size = 500. 5 Experiments II: A Robust Integrated SDE Strategy Based on the experimental evidence from the previous section, we propose an empirically strong SDE strategy (termed ES-SDE) using the well-performing options: a combination of the Inst-first and No-MI input designs and the Lines, PU, and TxtLabel output designs. We do not use the CoT design because of its high annotation cost and relatively unstable output. In this section, we conduct comprehensive experiments to validate its effectiveness across different downstream tasks, as well as its robustness against perturbations in instructions and generation. 5.1 Settings Tasks and datasets. To evaluate the effectiveness of ES-SDE, we conduct experiments on three challenging downstream tasks: \u2022 GENIA (Ohta et al., 2002). A nested named entity recognition (Nested-NER) dataset in the molecular biology domain, where ChatGPT (GPT-3.5) only achieves an F1 score of 50.89% using 5-shot CoT reasoning (Han et al., 2023). \u2022 MAVEN (Wang et al., 2020). A general-domain event detection (ED) dataset. Han et al. (2023) demonstrate that the performance of ChatGPT on ED tasks falls below expectations. We use the top-10 event types in our experiments. \u2022 Review11. Our self-collected Chinese MASA dataset, which involves 11 aspects and is more complicated than the MASA tasks in Section 4. Baselines. As a comparison to ES-SDE, we also propose an empirically weak SDE strategy (EW-SDE), combining Inst-last, Natural, and OU while keeping the other options the same. We naturally hypothesize that EW-SDE should be weaker than ES-SDE. Note that ES-SDE and EW-SDE are both evidence-based strategies derived from the previous empirical results; therefore, we also set up a heuristic-based baseline that follows the prompt designs from the study of Han et al. (2023), which are similar to a combination of the Inst-first and OU options with a \"lines-of-list\" output format. Examples of these strategies can be seen in Appendix 11. Models. For a more generalized evaluation, we utilize two new LLMs instead of those used in Section 4. Considering the task language, llama2-7b-chat (Touvron et al., 2023b) is used for GENIA and MAVEN, and qwen1.5-4b-chat (Bai et al., 2023), a recently released LLM, is used for Review11. The training details are the same as in Section 4. 5.2 Results Figure 6 reports the comparison between different sample design strategies from different perspectives. Soft-match F1 scores (Han et al., 2023) are reported for GENIA and MAVEN, and \u03ba is reported for Review11. More detailed results are in Appendix A.5. Several key conclusions can be observed: 1) ES-SDE maintains its advantages across tasks and training sizes. Figure 6-(a) demonstrates a consistent trend that ES-SDE keeps its advantage as the training size increases from 500 to 4,000. Notably, 500 ES-SDE samples are worth \u223c2,000 EW-SDE or heuristic samples on the GENIA and Review11 tasks, indicating the high quality of ES-SDE samples. 2) Stable on decoding randomness.
By default, the model employs a greedy decoding strategy (no sampling). Figure 6-(b) shows the results when decoding sampling is activated with varying random seeds. ES-SDE maintains exceptional stability across different seeds on all three tasks. The adoption of decoding sampling tends to diminish the performance of both the EW-SDE and heuristic strategies on GENIA and MAVEN, while ES-SDE gives stable performance. 3) Robust to instruction variation. A task instruction can be phrased in many ways, so we validate the sensitivity of the different strategies to different formulations of the instruction by changing the common content to other formulations (examples in Appendix 12). As shown in Figure 6-(c), ES-SDE keeps its edge across the variations, showing its robustness to instruction content. Overall, ES-SDE represents a reliable and potent approach for the DT of LLMs, illustrating that, through a careful SDE process, LLMs can achieve much higher performance on downstream tasks. Note that ES-SDE may not be the best strategy for all tasks. A detailed investigation into SDE across a broader spectrum of tasks and models could yield even more effective strategies. 6 Can PE guide SDE? An Additional Analysis Prompts are key to understanding models\u2019 innate qualities and capabilities. A good PE method often indicates patterns that an LLM is more familiar with or excels in. A natural question is: can PE guide SDE? To answer this question, we craft zero-shot and ICL prompts according to the different SDE options and evaluate their PE performance. Figure 7 reports the average rankings of SDE options and their corresponding prompts on the MASA ID tasks. Detailed results for each task are in Appendix A.8. Figure 7: Average rankings of the DT performances of SDE options and zero-shot/ICL/PPL rankings of their corresponding prompts, across the input, output, and reasoning options. Results are based on the MASA ID tasks across 6 LLMs. Our analysis reveals some consistent patterns: Inst-first is an effective choice for both PE and SDE; CoT improves performance in both PE and SDE evaluations. However, there are also many counterintuitive findings. For example, the OU option consistently harms DT performance in our previous experiments, yet its corresponding prompts produce notably better zero-shot or ICL results for certain LLMs; similarly, while the Natural option outperforms the Lines approach for base models in SDE, the reverse is true in zero-shot and ICL evaluations for models like c-llama2-base and intern-base. Gonen et al. (2023) showed across a wide range of tasks that lower perplexity (PPL) generally indicates better prompt designs. Inspired by this, we also conduct a PPL analysis on the ICL prompts/predictions corresponding to each SDE option. Interestingly, the OU-like prompt gives the highest average PPL scores across all options, which seems contradictory given that OU brings better zero-shot or ICL results. The JSON format surprisingly achieves rather low PPL scores; however, its SDE performance is worse than that of Lines. These findings highlight a complex landscape where prompt design patterns do not always align with SDE effectiveness, underscoring the nuanced relationship between PE and SDE. 7 Conclusion & Future Work In this study, we introduce SDE as an effective method to enhance the downstream-tuning performance of LLMs.
Through comprehensive ID and OOD experiments involving six LLMs, we demonstrate the effects of various sample design strategies, uncovering some interesting patterns that are consistent across different LLMs. Building on these findings, we develop the ES-SDE approach, which integrates the most effective options. Our experiments on three new tasks with two additional LLMs consistently show ES-SDE\u2019s superiority over baseline methods. Further analysis of the relationship between PE and SDE suggests that effective prompt designs do not necessarily translate to successful sample designs. This observation opens up avenues for more detailed investigations into the mechanisms of SDE in future research. 8 Limitations This research follows a two-step experimental approach. In the first step, we investigate the impact of each SDE option, the results are then used as evidence for the second step\u2014proposing an empirically strong SDE combination strategy. As an empirical study, this research is subject to certain limitations: 1. While we demonstrate that the experimental findings from the first phase are extendable to different downstream tasks, the applicability to other untested scenarios remains uncertain. For instance, although the Lines output design outperforms the JSON format in our current experiments, it is unclear if this advantage persists in more complex tasks with intricate structures. Future research will address these more challenging contexts; 2. With the rapid pace of advancements in LLMs, new and more sophisticated models are being introduced frequently. The models we used in our study were among the best open-source options available at the start of our research but have since been surpassed by newer releases. Although we assessed a total of 8 LLMs, including both base and chat variants, there remains a possibility that our findings may not be universally applicable to other models; 3. Combining different SDE options poses significant challenges, particularly without prior validation experiments such as those described in Section 4. The challenges are twofold. Firstly, unlike typical hyperparameters like learning rate or network layers, choosing different SDE options alters the training data itself, rendering traditional hyperparameter-tuning techniques such as Bayesian Optimization (Snoek et al., 2012) less practical. Secondly, evaluating LLMs on downstream tasks is both resource-intensive and costly, due to the need for customized task metrics, parsing rules, and high model inference costs. Therefore, developing a more efficient framework for SDE studies is a critical objective for future research." + }, + { + "url": "http://arxiv.org/abs/2310.13486v1", + "title": "Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning", + "abstract": "Finding the best way of adapting pre-trained language models to a task is a\nbig challenge in current NLP. Just like the previous generation of task-tuned\nmodels (TT), models that are adapted to tasks via in-context-learning (ICL) are\nrobust in some setups but not in others. Here, we present a detailed analysis\nof which design choices cause instabilities and inconsistencies in LLM\npredictions. First, we show how spurious correlations between input\ndistributions and labels -- a known issue in TT models -- form only a minor\nproblem for prompted models. 
Then, we engage in a systematic, holistic\nevaluation of different factors that have been found to influence predictions\nin a prompting setup. We test all possible combinations of a range of factors\non both vanilla and instruction-tuned (IT) LLMs of different scale and\nstatistically analyse the results to show which factors are the most\ninfluential, interactive or stable. Our results show which factors can be used\nwithout precautions and which should be avoided or handled with care in most\nsettings.", + "authors": "Lucas Weber, Elia Bruni, Dieuwke Hupkes", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.08691v2", + "title": "The Power of Scale for Parameter-Efficient Prompt Tuning", + "abstract": "In this work, we explore \"prompt tuning\", a simple yet effective mechanism\nfor learning \"soft prompts\" to condition frozen language models to perform\nspecific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft\nprompts are learned through backpropagation and can be tuned to incorporate\nsignal from any number of labeled examples. Our end-to-end learned approach\noutperforms GPT-3's \"few-shot\" learning by a large margin. More remarkably,\nthrough ablations on model size using T5, we show that prompt tuning becomes\nmore competitive with scale: as models exceed billions of parameters, our\nmethod \"closes the gap\" and matches the strong performance of model tuning\n(where all model weights are tuned). This finding is especially relevant in\nthat large models are costly to share and serve, and the ability to reuse one\nfrozen model for multiple downstream tasks can ease this burden. Our method can\nbe seen as a simplification of the recently proposed \"prefix tuning\" of Li and\nLiang (2021), and we provide a comparison to this and other similar approaches.\nFinally, we show that conditioning a frozen model with soft prompts confers\nbenefits in robustness to domain transfer, as compared to full model tuning.", + "authors": "Brian Lester, Rami Al-Rfou, Noah Constant", + "published": "2021-04-18", + "updated": "2021-09-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.10560v2", + "title": "Self-Instruct: Aligning Language Models with Self-Generated Instructions", + "abstract": "Large \"instruction-tuned\" language models (i.e., finetuned to respond to\ninstructions) have demonstrated a remarkable ability to generalize zero-shot to\nnew tasks. Nevertheless, they depend heavily on human-written instruction data\nthat is often limited in quantity, diversity, and creativity, therefore\nhindering the generality of the tuned model. We introduce Self-Instruct, a\nframework for improving the instruction-following capabilities of pretrained\nlanguage models by bootstrapping off their own generations. Our pipeline\ngenerates instructions, input, and output samples from a language model, then\nfilters invalid or similar ones before using them to finetune the original\nmodel. Applying our method to the vanilla GPT3, we demonstrate a 33% absolute\nimprovement over the original model on Super-NaturalInstructions, on par with\nthe performance of InstructGPT-001, which was trained with private user data\nand human annotations. 
For further evaluation, we curate a set of\nexpert-written instructions for novel tasks, and show through human evaluation\nthat tuning GPT3 with Self-Instruct outperforms using existing public\ninstruction datasets by a large margin, leaving only a 5% absolute gap behind\nInstructGPT-001. Self-Instruct provides an almost annotation-free method for\naligning pre-trained language models with instructions, and we release our\nlarge synthetic dataset to facilitate future studies on instruction tuning. Our\ncode and data are available at https://github.com/yizhongw/self-instruct.", + "authors": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi", + "published": "2022-12-20", + "updated": "2023-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.00190v1", + "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation", + "abstract": "Fine-tuning is the de facto way to leverage large pretrained language models\nto perform downstream tasks. However, it modifies all the language model\nparameters and therefore necessitates storing a full copy for each task. In\nthis paper, we propose prefix-tuning, a lightweight alternative to fine-tuning\nfor natural language generation tasks, which keeps language model parameters\nfrozen, but optimizes a small continuous task-specific vector (called the\nprefix). Prefix-tuning draws inspiration from prompting, allowing subsequent\ntokens to attend to this prefix as if it were \"virtual tokens\". We apply\nprefix-tuning to GPT-2 for table-to-text generation and to BART for\nsummarization. We find that by learning only 0.1\\% of the parameters,\nprefix-tuning obtains comparable performance in the full data setting,\noutperforms fine-tuning in low-data settings, and extrapolates better to\nexamples with topics unseen during training.", + "authors": "Xiang Lisa Li, Percy Liang", + "published": "2021-01-01", + "updated": "2021-01-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.07927v1", + "title": "A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications", + "abstract": "Prompt engineering has emerged as an indispensable technique for extending\nthe capabilities of large language models (LLMs) and vision-language models\n(VLMs). This approach leverages task-specific instructions, known as prompts,\nto enhance model efficacy without modifying the core model parameters. Rather\nthan updating the model parameters, prompts allow seamless integration of\npre-trained models into downstream tasks by eliciting desired model behaviors\nsolely based on the given prompt. Prompts can be natural language instructions\nthat provide context to guide the model or learned vector representations that\nactivate relevant knowledge. This burgeoning field has enabled success across\nvarious applications, from question-answering to commonsense reasoning.\nHowever, there remains a lack of systematic organization and understanding of\nthe diverse prompt engineering methods and techniques. This survey paper\naddresses the gap by providing a structured overview of recent advancements in\nprompt engineering, categorized by application area. For each prompting\napproach, we provide a summary detailing the prompting methodology, its\napplications, the models involved, and the datasets utilized. 
We also delve\ninto the strengths and limitations of each approach and include a taxonomy\ndiagram and table summarizing datasets, models, and critical points of each\nprompting technique. This systematic analysis enables a better understanding of\nthis rapidly developing field and facilitates future research by illuminating\nopen challenges and opportunities for prompt engineering.", + "authors": "Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.08773v4", + "title": "Cross-Task Generalization via Natural Language Crowdsourcing Instructions", + "abstract": "Humans (e.g., crowdworkers) have a remarkable ability in solving different\ntasks, by simply reading textual instructions that define them and looking at a\nfew examples. Despite the success of the conventional supervised learning on\nindividual datasets, such models often struggle with generalization across\ntasks (e.g., a question-answering system cannot solve classification tasks). A\nlong-standing challenge in AI is to build a model that learns a new task by\nunderstanding the human-readable instructions that define it. To study this, we\nintroduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their\nhuman-authored instructions, and 193k task instances (input-output pairs). The\ninstructions are obtained from crowdsourcing instructions used to create\nexisting NLP datasets and mapped to a unified schema. Using this meta-dataset,\nwe measure cross-task generalization by training models on seen tasks and\nmeasuring generalization to the remaining unseen ones. We adopt generative\npre-trained language models to encode task-specific instructions along with\ninput and generate task output. Our results indicate that models benefit from\ninstructions when evaluated in terms of generalization to unseen tasks (19%\nbetter for models utilizing instructions). These models, however, are far\nbehind an estimated performance upperbound indicating significant room for more\nprogress in this direction.", + "authors": "Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi", + "published": "2021-04-18", + "updated": "2022-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. 
For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.01652v5", + "title": "Finetuned Language Models Are Zero-Shot Learners", + "abstract": "This paper explores a simple method for improving the zero-shot learning\nabilities of language models. We show that instruction tuning -- finetuning\nlanguage models on a collection of tasks described via instructions --\nsubstantially improves zero-shot performance on unseen tasks.\n We take a 137B parameter pretrained language model and instruction-tune it on\nover 60 NLP tasks verbalized via natural language instruction templates. We\nevaluate this instruction-tuned model, which we call FLAN, on unseen task\ntypes. FLAN substantially improves the performance of its unmodified\ncounterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we\nevaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE,\nBoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number\nof finetuning datasets, model scale, and natural language instructions are key\nto the success of instruction tuning.", + "authors": "Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le", + "published": "2021-09-03", + "updated": "2022-02-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.11401v4", + "title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", + "abstract": "Large pre-trained language models have been shown to store factual knowledge\nin their parameters, and achieve state-of-the-art results when fine-tuned on\ndownstream NLP tasks. However, their ability to access and precisely manipulate\nknowledge is still limited, and hence on knowledge-intensive tasks, their\nperformance lags behind task-specific architectures. Additionally, providing\nprovenance for their decisions and updating their world knowledge remain open\nresearch problems. Pre-trained models with a differentiable access mechanism to\nexplicit non-parametric memory can overcome this issue, but have so far been\nonly investigated for extractive downstream tasks. We explore a general-purpose\nfine-tuning recipe for retrieval-augmented generation (RAG) -- models which\ncombine pre-trained parametric and non-parametric memory for language\ngeneration. We introduce RAG models where the parametric memory is a\npre-trained seq2seq model and the non-parametric memory is a dense vector index\nof Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG\nformulations, one which conditions on the same retrieved passages across the\nwhole generated sequence, the other can use different passages per token. 
We\nfine-tune and evaluate our models on a wide range of knowledge-intensive NLP\ntasks and set the state-of-the-art on three open domain QA tasks, outperforming\nparametric seq2seq models and task-specific retrieve-and-extract architectures.\nFor language generation tasks, we find that RAG models generate more specific,\ndiverse and factual language than a state-of-the-art parametric-only seq2seq\nbaseline.", + "authors": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, Douwe Kiela", + "published": "2020-05-22", + "updated": "2021-04-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.10385v2", + "title": "GPT Understands, Too", + "abstract": "Prompting a pretrained language model with natural language patterns has been\nproved effective for natural language understanding (NLU). However, our\npreliminary study reveals that manual discrete prompts often lead to unstable\nperformance -- e.g., changing a single word in the prompt might result in\nsubstantial performance drop. We propose a novel method P-Tuning that employs\ntrainable continuous prompt embeddings in concatenation with discrete prompts.\nEmpirically, P-Tuning not only stabilizes training by minimizing the gap\nbetween various discrete prompts, but also improves performance by a sizeable\nmargin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is\ngenerally effective for both frozen and tuned language models, under both the\nfully-supervised and few-shot settings.", + "authors": "Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang", + "published": "2021-03-18", + "updated": "2023-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.11760v7", + "title": "Large Language Models Understand and Can be Enhanced by Emotional Stimuli", + "abstract": "Emotional intelligence significantly impacts our daily behaviors and\ninteractions. Although Large Language Models (LLMs) are increasingly viewed as\na stride toward artificial general intelligence, exhibiting impressive\nperformance in numerous tasks, it is still uncertain if LLMs can genuinely\ngrasp psychological emotional stimuli. Understanding and responding to\nemotional cues gives humans a distinct advantage in problem-solving. In this\npaper, we take the first step towards exploring the ability of LLMs to\nunderstand emotional stimuli. To this end, we first conduct automatic\nexperiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,\nLlama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative\napplications that represent comprehensive evaluation scenarios. Our automatic\nexperiments show that LLMs have a grasp of emotional intelligence, and their\nperformance can be improved with emotional prompts (which we call\n\"EmotionPrompt\" that combines the original prompt with emotional stimuli),\ne.g., 8.00% relative performance improvement in Instruction Induction and 115%\nin BIG-Bench. In addition to those deterministic tasks that can be\nautomatically evaluated using existing metrics, we conducted a human study with\n106 participants to assess the quality of generative tasks using both vanilla\nand emotional prompts. 
Our human study results demonstrate that EmotionPrompt\nsignificantly boosts the performance of generative tasks (10.9% average\nimprovement in terms of performance, truthfulness, and responsibility metrics).\nWe provide an in-depth discussion regarding why EmotionPrompt works for LLMs\nand the factors that may influence its performance. We posit that EmotionPrompt\nheralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs\ninteraction.", + "authors": "Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie", + "published": "2023-07-14", + "updated": "2023-11-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.10385v2", + "title": "GPT Understands, Too", + "abstract": "Prompting a pretrained language model with natural language patterns has been\nproved effective for natural language understanding (NLU). However, our\npreliminary study reveals that manual discrete prompts often lead to unstable\nperformance -- e.g., changing a single word in the prompt might result in\nsubstantial performance drop. We propose a novel method P-Tuning that employs\ntrainable continuous prompt embeddings in concatenation with discrete prompts.\nEmpirically, P-Tuning not only stabilizes training by minimizing the gap\nbetween various discrete prompts, but also improves performance by a sizeable\nmargin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is\ngenerally effective for both frozen and tuned language models, under both the\nfully-supervised and few-shot settings.", + "authors": "Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang", + "published": "2021-03-18", + "updated": "2023-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.07597v1", + "title": "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection", + "abstract": "The introduction of ChatGPT has garnered widespread attention in both\nacademic and industrial communities. ChatGPT is able to respond effectively to\na wide range of human questions, providing fluent and comprehensive answers\nthat significantly surpass previous public chatbots in terms of security and\nusefulness. On one hand, people are curious about how ChatGPT is able to\nachieve such strength and how far it is from human experts. On the other hand,\npeople are starting to worry about the potential negative impacts that large\nlanguage models (LLMs) like ChatGPT could have on society, such as fake news,\nplagiarism, and social security issues. In this work, we collected tens of\nthousands of comparison responses from both human experts and ChatGPT, with\nquestions ranging from open-domain, financial, medical, legal, and\npsychological areas. We call the collected dataset the Human ChatGPT Comparison\nCorpus (HC3). Based on the HC3 dataset, we study the characteristics of\nChatGPT's responses, the differences and gaps from human experts, and future\ndirections for LLMs. We conducted comprehensive human evaluations and\nlinguistic analyses of ChatGPT-generated content compared with that of humans,\nwhere many interesting results are revealed. After that, we conduct extensive\nexperiments on how to effectively detect whether a certain text is generated by\nChatGPT or humans. 
We build three different detection systems, explore several\nkey factors that influence their effectiveness, and evaluate them in different\nscenarios. The dataset, code, and models are all publicly available at\nhttps://github.com/Hello-SimpleAI/chatgpt-comparison-detection.", + "authors": "Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu", + "published": "2023-01-18", + "updated": "2023-01-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.13688v2", + "title": "The Flan Collection: Designing Data and Methods for Effective Instruction Tuning", + "abstract": "We study the design decisions of publicly available instruction tuning\nmethods, and break down the development of Flan 2022 (Chung et al., 2022).\nThrough careful ablation studies on the Flan Collection of tasks and methods,\nwe tease apart the effect of design decisions which enable Flan-T5 to\noutperform prior work by 3-17%+ across evaluation settings. We find task\nbalancing and enrichment techniques are overlooked but critical to effective\ninstruction tuning, and in particular, training with mixed prompt settings\n(zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+)\nperformance in all settings. In further experiments, we show Flan-T5 requires\nless finetuning to converge higher and faster than T5 on single downstream\ntasks, motivating instruction-tuned models as more computationally-efficient\nstarting checkpoints for new tasks. Finally, to accelerate research on\ninstruction tuning, we make the Flan 2022 collection of datasets, templates,\nand methods publicly available at\nhttps://github.com/google-research/FLAN/tree/main/flan/v2.", + "authors": "Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts", + "published": "2023-01-31", + "updated": "2023-02-14", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. 
In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. 
Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? 
A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. 
Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. 
We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range to tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. 
Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. 
Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. 
In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. 
Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. 
However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. 
The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. 
Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. 
Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. 
We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. 
However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? 
An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. 
Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. 
Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. 
We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. 
Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. 
We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. 
The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. 
Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. 
Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. 
(2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ) which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. 
It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. 
This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. 
However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. 
Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. 
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. 
For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. 
The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. 
A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. 
Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). 
Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. 
This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. 
While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. 
Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. 
We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. 
To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.15949v1", + "title": "Sequence can Secretly Tell You What to Discard", + "abstract": "Large Language Models (LLMs), despite their impressive performance on a wide\nrange of tasks, require significant GPU memory and consume substantial\ncomputational resources. In addition to model weights, the memory occupied by\nKV cache increases linearly with sequence length, becoming a main bottleneck\nfor inference. In this paper, we introduce a novel approach for optimizing the\nKV cache which significantly reduces its memory footprint. Through a\ncomprehensive investigation, we find that on LLaMA2 series models, (i) the\nsimilarity between adjacent tokens' query vectors is remarkably high, and (ii)\ncurrent query's attention calculation can rely solely on the attention\ninformation of a small portion of the preceding queries. 
Based on these\nobservations, we propose CORM, a KV cache eviction policy that dynamically\nretains important key-value pairs for inference without finetuning the model.\nWe validate that CORM reduces the inference memory usage of KV cache by up to\n70% without noticeable performance degradation across six tasks in LongBench.", + "authors": "Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Attention Let x \u2208Rn\u00d7d denote the input embeddings from a sequence of n feature vectors of dimension d. The multi-head self-attention [17], as a core module of Transformer model, facilitates 2Let t denote sequence length, we count the proportion of keys which attention score larger than average score 1 t and denote it as r. The larger r is, the sparser the layer is. 2 contextual information interaction within each head in the following manner: Q = xWq, K = xWk, V = xWv, Attention(x) = softmax(QKT \u221adh ) \u00d7 V (1) Q, K, V represent the query, key, and value matrices, which are obtained by linearly mapping x using weight matrices Wq, Wk, and Wv \u2208Rd\u00d7dh, respectively. dh is the dimension of each individual head. KV Cache According to autoregressive paradigm, transformer decoder model predicts future tokens based on both previous and current tokens. Recalculating the key-value pairs for previous tokens at each decoding step is clearly an inefficient strategy. A common practice is to retain the key-value pairs of previous tokens for subsequent reuse. Thus, the consumption of KV cache becomes linearly correlated with the length of input sequence. When dealing with long contexts, however, the use of such a space-time trade-off approach may still pose challenges. Training Policies The advent of multi-query attention (MQA) [8] is to address the influence of attention heads on KV cache within multi-head attention (MHA) mechanism. It facilitates the sharing of the same set of keys and values among different heads to alleviate cache pressure. Grouped-query attention (GQA) [9] represents a trade-off between MHA and MQA, achieving key-value sharing within each group through mean-pooling-based uptraining. Both methods require additional training to restore model performance due to the inability to directly convert. Training-free Policies During generation, sequence length is the primary factor of cache pressure. Recent methods aim to balance model efficiency and inference cost without extra training and architectural changes. StreamingLLM [10] keeps attention sink token and recent tokens throughout decoding process to align with the training window. Scissorhands [11] maintains pivotal tokens and recent tokens based on the persistence of importance hypothesis. H2O [12] utilizes accumulated attention score to maintain heavy hitters and recent tokens. TOVA [13] removes tokens with the lowest current attention score from the fixed cache at each decoding step. RoCo [14] retains tokens in the fixed cache based on high mean cumulative attention scores and top r standard deviations. 
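All of these training-free policies act on the key-value states produced by the autoregressive decode loop; a minimal single-head sketch of Eq. (1) together with the KV cache reuse it implies is shown below (PyTorch assumed; the dimensions and random projection weights are illustrative and not tied to any cited model):

```python
import torch
import torch.nn.functional as F

d, d_h = 64, 64                          # illustrative model / head dimensions
W_q, W_k, W_v = (torch.randn(d, d_h) / d ** 0.5 for _ in range(3))  # Wq, Wk, Wv in Eq. (1)

def decode_step(x_t, k_cache, v_cache):
    """One autoregressive step: project the new token, append its key/value to the
    cache, and attend over all cached keys/values (single head, Eq. (1))."""
    q = x_t @ W_q                                        # [1, d_h]
    k_cache = torch.cat([k_cache, x_t @ W_k], dim=0)     # [t, d_h], reused at the next step
    v_cache = torch.cat([v_cache, x_t @ W_v], dim=0)     # [t, d_h]
    attn = F.softmax(q @ k_cache.T / d_h ** 0.5, dim=-1) # [1, t]
    return attn @ v_cache, k_cache, v_cache

k_cache, v_cache = torch.empty(0, d_h), torch.empty(0, d_h)
for t in range(8):                                       # toy generation loop
    x_t = torch.randn(1, d)                              # embedding of the current token
    out, k_cache, v_cache = decode_step(x_t, k_cache, v_cache)
print(k_cache.shape)                                     # cache grows linearly: torch.Size([8, 64])
```

An eviction policy of the kind listed above amounts to replacing the unconditional `torch.cat` with a rule that drops some cached rows before the next step.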
Aforementioned methods consistently operate on a fixed cache, ignoring that the number of tokens playing an important role may vary across different attention heads and layers.", + "pre_questions": [], + "main_content": "Introduction Large language models (LLMs) have demonstrated impressive proficiency in a wide range of natural language processing tasks such as question answering, summarization and multi-turn dialogues [1\u20133]. Considering substantial cost of deploying LLMs introduced by tremendous model size and quadratic cost of attention layer, many works focused on model compression and memory-efficient attention techniques [4\u20137]. However, the size of KV cache, which stores previous tokens\u2019 key and value states to avoid re-computation, scaling linearly with sequence length during generation, also incurs significant overhead. For instance, even a 7 billion-parameter model with batch size of 128 and sequence length of 4096 results in 256GB of KV cache, far exceeds memory consumed by model itself which is only 14GB. A natural idea is to discard some less informative KV cache to reduce memory consumption. The challenge lies in finding a balance between discarding as much as possible while still maintaining model performance. Despite multi-query attention [8] and grouped-query attention [9] can reduce the size of KV cache by reducing attention heads, it needs re-training to recover performance of original model. Recent works \u2217Corresponding Author Preprint. In progress. arXiv:2404.15949v1 [cs.CL] 24 Apr 2024 [10\u201314] have investigated implementing KV cache using specific eviction policy, that determines which key-value states should be evicted from KV cache. These methods aim to compress KV cache to a pre-defined budget size, thereby reducing memory and computational overhead. However, they save same number of key-value pairs for all attention heads and layers, ignoring that the number of keys playing an important role may vary across different attention heads and layers [15]. (a) (b) Figure 1: Attention sparsity of LLaMA2-7B. (a) Layer-wise attention sparsity. (b) Head-wise attention sparsity of layer 0 and layer 1. Intuitively, if important information in the KV cache exceeds the predetermined budget size, the performance of the model is likely to decline as it unavoidably evicts some crucial information. Our preliminary exploration also reveals that different attention layers and heads show different sparsities as shown in Figure 1. First, we observe that bottom layers of the model are relatively dense2, while the remaining attention layers exhibit significant sparsity. Second, even within the same layer, different heads can exhibit obvious differences in sparsity levels. These properties suggest that we need to treat different layers and heads differently, rather than using the same budget size for all of them. In addition, we prove that completely similar queries have similar concerns about keys, and observe that recent query vectors are quite similar on LLaMA2 series models so current query can directly use recent query attention messages during generation. Based on the above insights, we first define the generation process of LLMs with a budget-unrestricted KV cache in Section 3. Then we propose Cache Optimization with Recent Message (CORM), a framework that exploits recent query attention information for KV cache optimization and token generation of LLMs. 
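The 256GB KV cache figure quoted in the introduction above follows directly from the cache shape; a back-of-the-envelope sketch, assuming LLaMA2-7B's 32 layers, 4096 hidden width and fp16 storage:

```python
# Rough KV cache size: 2 (K and V) x layers x hidden x seq_len x batch x bytes per element
layers, hidden = 32, 4096          # LLaMA2-7B configuration (assumed)
seq_len, batch = 4096, 128
bytes_per_elem = 2                 # fp16 storage assumed

kv_bytes = 2 * layers * hidden * seq_len * batch * bytes_per_elem
weight_bytes = 7e9 * bytes_per_elem                    # ~7B parameters in fp16

print(f"KV cache: {kv_bytes / 2**30:.0f} GiB")         # -> 256 GiB
print(f"weights : {weight_bytes / 1e9:.0f} GB")        # -> 14 GB
```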
Specifically, \u2022 In Section 3, we explore the similarity between query vectors of all tokens within the same sequence, revealing that recent query vectors are highly similar, which implies that (i) keys that are important for recent queries might also be important for the current query; and (ii) removing key-value pairs that appear to be less informative for recent queries can largely preserve the performance of the model. \u2022 In Section 4, we present a simple method which dynamically evicts minor key-value pairs determined by recent tokens\u2019 attention information. We conduct extensive experiments on LLaMA2-7B-Chat, considering its popularity and wide usage, to evaluate CORM across 6 tasks from LongBench [16] covering question answering, summarization, code completion, etc. Experiments show that even without explicitly setting a budget size, our method can still achieve a high compression rate. Our method achieves better performance compared to StreamingLLM [10], Scissorhands [11] and H2O [12] with an over 70% KV cache reduction rate and can even come close to fully restoring the performance of the model. We first demonstrate the existence of attention sparsity in LLMs in Section 3.1, then discuss the phenomenon that similar queries have similar attention concerns for keys in Section 3.2. In Section 3.3, we show an intriguing observation that the current query is most similar to recent queries. 3.1 Attention sparsity in LLMs We first explore the sparsity in attention layers of LLMs, which provides an effective basis for reducing KV cache size. Specifically, we use the proportion of important keys to represent attention sparsity. Let qt \u2208R1\u00d7d denote the query state vector at step t, ki \u2208R1\u00d7d denote the key state vector at step i (1 \u2264i \u2264t), where d is the hidden dimension (for the sake of simplicity, we only consider a single head here). The normalized attention score of qt for ki is computed as: \u03b1t,i = exp(qt kiT / \u221ad) / \u2211j=1..t exp(qt kjT / \u221ad). (2) Definition 3.1 (Important Key) A key ki is considered important at step t if and only if \u03b1t,i \u22651/t; otherwise it is considered minor. We conduct zero-shot inference with the LLaMA2-7B model on the test set of PG-19 [18]. We plot the layer-wise and head-wise sparsity within attention blocks; the results are presented in Figure 1. It reveals that bottom layers are relatively dense, while other layers are highly sparse with over 90% sparsity. This makes it possible to perform attention computation on only a small part of the KV cache during generation. 3.2 Similar queries have similar concerns for keys The previous section reveals the existence of attention sparsity in LLMs, which provides an opportunity to reduce KV cache size while maintaining performance. In this section, we give a theoretical analysis, for eviction policy design, that similar queries have similar concerns for keys. Consider the i-th and j-th query state vectors qi and qj in a sequence of token length T (i < j \u2264T). Their cosine similarity can be computed as: cosine_similarity(qi, qj) = qi qjT / (\u2225qi\u2225 \u00b7 \u2225qj\u2225). (3) Consider all key states k1, k2, ..., ki\u22121 before the i-th key. Assume that cosine_similarity(qi, qj) = 1, then qi = m \u00b7 qj with m \u2208R+.
The attention weight3 of qi to the previous i \u22121 keys can be represented as: attention_weight = 1 \u221a d (qikT 1 , qikT 2 , ..., qikT i\u22121) = m \u221a d \u00b7 (qjkT 1 , qjkT 2 , ..., qjkT i\u22121). (4) Note that m is a positive number that does not affect the relative order of the attention weights. For example, for qi, if qikT 1 > qikT 2 , there must be qjkT 1 > qjkT 2 for qj. This means if a key is important to qi, it is also important to qj, though the degree of importance may vary due to the softmax function. Figure 2: Similar queries have similar concerns for keys. We plot the attention map from two different layers in a sentence. We discretize the attention score and those important keys are shown in bright green. Each attention map has two red borders, the bottom border shows important keys that current query actually focuses on, while another border shows important keys that the most similar query focuses on. Although it\u2019s nearly impossible that cosine_similarity(qi, qj) = 1 in real situation, we can make the hypothesis that two similar queries may have similar concerns for keys. To validate this hypothesis, we provide two attention maps of a sentence randomly drawn from PG-19 using LLaMA2-7B, as 3attention weight is unnormalized attention score 4 shown in Figure 2. Important keys are marked with bright green, more plots are available in Appendix A.1. We observe that the hypothesis is true, and similar queries exhibit similar concerns for important keys. At the same time, important keys only account for a small proportion especially in deeper attention layers, which is consistent with the finding that deeper layers are sparser in previous section. 3.3 Similarity exploration of query vectors We have validated two similar queries have similar concerns for keys in Section 3.2, we also need to validate that at each step we can find a previous query state that is similar enough to current query state in same layer and same head. To check this, we visualize cosine similarity of query vectors within same sequence as shown in Figure 3, more plots are available in Appendix A.2. We observe an intriguing phenomenon that many images show clear oblique color segmentation, with the top oblique block closest to dark red which means current query is most similar to recent queries. Figure 3: Visualization of query vectors\u2019 cosine similarity over one sentence with a length of 1024. The i-th row of the map represents cosine similarity of the i-th query to all previous queries. The plot reveals that in most cases current query is most similar to recent queries. Through above observations, we see an opportunity to design a KV cache eviction policy based on query similarity that preserves the LLM generation performance. 4 Cache Optimization with Recent Message In this section, we present CORM, a method reduces the KV cache memory based on recent query attention information without any fine-tuning process. In Section 4.1, we derive that current query can directly use recent query attention messages during generation. In Section 4.2, we present CORM eviction policy and describe how it works during generation. 4.1 Generate based on recent query attention messages Consider observations in Section 3, intuitively, we can directly store all queries and their attention information for future reference. At each generation step, use current query to find the most similar one from previous queries, and use its saved attention information to calculate solely on important keys. 
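Taken literally, that lookup might be sketched as follows (an illustrative sketch only; random tensors stand in for real per-head query, key and value states, and the `important_keys` helper follows Definition 3.1 and Eq. (2)):

```python
import torch
import torch.nn.functional as F

def important_keys(q, keys):
    """Indices of keys whose normalized attention score is >= 1/t (Definition 3.1)."""
    t = keys.size(0)
    alpha = F.softmax(q @ keys.T / keys.size(-1) ** 0.5, dim=-1)   # Eq. (2), shape [1, t]
    return (alpha >= 1.0 / t).nonzero(as_tuple=True)[1]

d, t = 64, 256
keys, values = torch.randn(t, d), torch.randn(t, d)    # stand-ins for one head's cached K/V
past_queries = torch.randn(t, d)                       # every stored query so far (the costly part)
past_important = [important_keys(past_queries[i:i + 1], keys[:i + 1]) for i in range(t)]

q_new = torch.randn(1, d)                              # current query
sims = F.cosine_similarity(q_new, past_queries)        # Eq. (3) against every stored query
j = int(sims.argmax())                                 # most similar previous query
idx = past_important[j]                                # reuse its important-key set
attn = F.softmax(q_new @ keys[idx].T / d ** 0.5, dim=-1)
out = attn @ values[idx]                               # attend only over the reused important keys
```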
However, this approach incurs a significant cost. First, storing all queries results in a substantial increase in memory overhead. Second, the requirement of performing similarity calculations at each step adds to the computational overhead. Since in most cases current query is most similar to recent queries as described in Section 3.3, we can just use recent query attention messages. And from Figure 2 we can also observe that only a small proportion of keys are considered important by recent queries. Therefore even if we save all the keys that are considered important in previous steps, we can save a lot of memory. 4.2 Eviction algorithm via recent message We have shown recent query attention information is enough for cache optimization in Section 4.1. In the following, we formally define this algorithm and introduce how to integrate it into LLM generation directly. Definition 4.1 (Long-term Minor Key) A key ki is considered as long-term minor key only if it is considered minor in all recent r steps (from t \u2212r + 1 to t). 5 Approach CORM will have a recent window of size w to record the information of recent w queries, and will always keep recent r keys unremoved to prevent them from being discarded prematurely due to insufficient observations. During generation, ki, vi will be discarded once ki is regarded as long-term minor key. For better explanation we present pytorch code4 of main algorithm in Algorithm 1. Intuitively, when w is larger, more keys and values will be saved, the compression rate will be smaller and performance will be better; Conversely, when w is smaller, fewer keys and values will be saved, the compression rate will be larger and performance will be worse. So there\u2019s a tradeoff between performance and compress rate. Memory Overhead Analysis In order to reduce memory overhead of KV cache, an extra memory overhead is introduced by recent information cache. We need to store recent query messages which increase memory overhead. However, these overheads are far less than compressed KV cache, one can use a small portion of memory to avoid maintaining full KV cache memory without obvious performance degradation. On the other hand, the compression rate will increase as the sequence length increases as shown in Figure 4, resulting in a lower memory overhead for this component in comparison. 
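The bookkeeping just described (a window of w recent importance records, r always-kept recent keys, and eviction of long-term minor keys in the sense of Definition 4.1) can be condensed into the short runnable sketch below. It parallels the paper's Algorithm 1 that follows, but it is a simplified single-head restatement rather than the released implementation; in particular, it also prunes the message columns so that shapes stay consistent after an eviction.

```python
import torch

def corm_step(keys, values, message, attn_score, w, r, t):
    """One single-head CORM bookkeeping step (simplified sketch).
    keys, values: [l, d] cached states; message: [m, l_prev] bool importance records of
    recent queries; attn_score: [1, l] normalized attention of the current query."""
    cur = attn_score >= 1.0 / t                                  # Definition 3.1 flags, [1, l]
    pad = torch.zeros(message.size(0), keys.size(0) - message.size(1), dtype=torch.bool)
    message = torch.cat([torch.cat([message, pad], dim=1), cur], dim=0)[-w:]

    if message.size(0) < w:                                      # not enough observations yet
        return keys, values, message

    keep = message.any(dim=0)                                    # False = long-term minor (Def. 4.1)
    keep[-r:] = True                                             # always keep the r most recent keys
    return keys[keep], values[keep], message[:, keep]            # prune cache and records together

# toy usage: grow a single-head cache step by step and let the rule evict long-term minor keys
d, w, r = 8, 4, 2
keys, values = torch.empty(0, d), torch.empty(0, d)
message = torch.zeros(0, 0, dtype=torch.bool)
for t in range(1, 33):
    keys = torch.cat([keys, torch.randn(1, d)])
    values = torch.cat([values, torch.randn(1, d)])
    q = torch.randn(1, d)
    attn = torch.softmax(q @ keys.T / d ** 0.5, dim=-1)          # [1, l]
    keys, values, message = corm_step(keys, values, message, attn, w, r, t)
print("cached keys after 32 steps:", keys.size(0))
```

Note that nothing here caps the number of kept pairs per head; the cache size is simply whatever the "some recent query found it important" rule leaves behind, which is the budget-free behavior described above.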
Algorithm 1 Single-head KV cache eviction with CORM (unbatched) def corm_eviction(keys, values, message, attn_score, w, r, t): \"\"\" Args: keys: previous key states, a tensor with shape [l, d] values: previous value states, a tensor with shape [l, d] message: attention message, a tensor with shape of [m, l-1] attn_score: current steps attention score, a tensor with shape of [1, l] w: window size, a scalar r: recent size, a scalar t: current step, a scalar Returns: updated_keys: updated keys updated_values: updated values updated_message: updated message \"\"\" m = message.shape[0] # update attention message message = torch.cat([message, torch.zeros(m, 1)], dim=1) \u25b7pad to [m, l] cur_message = attn_score >= 1 / t message = torch.cat([message, cur_message], dim=1)[-w:, :] if message.shape[0] < w: return keys, values, message else: # determine the key-value pairs that necessitate discarding decision = message.any(dim=0) decision[-r:] = True \u25b7always keep recent r tokens unremoved indices = torch.nonzero(decision).squeeze() keys = keys[indices, :] values = values[indices, :] return keys, values, message 4For the sake of brevity, the code snippet only demonstrates single-head eviction operation, while in the actual implementation, it will be performed on each head at every layer. 6 5 Empirical Evaluation In this section, we present the results that demonstrate CORM can reduce up to 70% of the memory footprint of KV Cache without accuracy degradation on LLaMA2-7B-Chat. Dataset To broadly validate feasibility of our method on real-world use cases, we choose LongBench [16] as our evaluation benchmark, which contains a wide range of long-text tasks such as question answering [19\u201324], summarization [25\u201328], few-shot learning [29\u201332], synthetic task and code completion [33, 34]. Here we do not consider short text tasks, because even full cache doesn\u2019t have any bottlenecks. Models Since sequence length is the main factor in the continuous growth of KV Cache, we employ LLaMA2-7B-Chat [2] for 4K test considering its wide usage. Baselines Since CORM reduces KV cache without need for training, we consider several similar approaches as our baselines: StreamLLM [10], Scissorhands [11] and H2O [12]. In addition, the full KV cache is also considered as strong baseline to measure the performance loss of other methods. Setting All baselines can be regarded as fixed budget size KV cache compression, however CORM is a dynamic compression method. Since we find that CORM has similar compression rates for various task texts with the same sequence length. For fair comparison, we plot the relationship between model compression rate and sequence length using texts randomly sampled from PG19 [18] as shown in Figure 4. Figure 4: Relationship between compression ratio and sequence length. Plots show that compression rate with CORM \"256+256\" and budget=1024 are close for LLaMA2-7B-Chat. Main Results We evaluate LLaMA2-7B-Chat for 4K length text. Results are summarized in Table 1 & 2 for LLaMA2-7B-Chat. The following observations can be drawn: (1) CORM consistently outperforms previous methods at the same compression rate across a wide range of tasks. (2) Meanwhile, with over 70% KV cache reduction, CORM achieves comparable performance as the model with full KV cache and even surpass it on some tasks, we speculate it\u2019s because there\u2019s some noise in full KV cache that affects model output and our method can eliminate this noise to a certain extent by discarding some KV cache. 
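Because CORM sets no explicit budget, the reduction reported above is measured rather than configured; a hypothetical helper for logging the overall compression rate behind a plot like Figure 4 might look like this (an assumed piece of bookkeeping, not the paper's evaluation code):

```python
def kv_compression_rate(kept_per_head, seq_len):
    """kept_per_head: number of cached key-value pairs in each (layer, head) after eviction.
    Returns the fraction of the full KV cache that has been discarded."""
    total_slots = len(kept_per_head) * seq_len          # a full cache keeps seq_len pairs per head
    kept_slots = sum(kept_per_head)
    return 1.0 - kept_slots / total_slots

# e.g. 32 layers x 32 heads, 4K context, ~1.2K pairs kept on average per head
print(f"{kv_compression_rate([1200] * (32 * 32), 4096):.0%} of the KV cache evicted")  # ~71%
```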
5.1 Budget unnecessity: is unbudgeted better? We primarily focus on the effectiveness of not setting a budget versus setting a fixed budget. Note that since we use same window size and recent size as Scissorhands in the experiment, it can be regarded a natural ablation experiment. And Table 1 & 2 have shown that, at the similar compression rate, CORM is much better than Scissorhands in most tasks, and performance of other tasks is close. This verifies that different transformer layers and heads should be treated differently rather than setting a same fixed budget size. 7 Table 1: Results (%) on single-doc QA, multi-doc QA and summarization tasks. \"Full\" refers to LLaMA2-7B-Chat utilizing full KV Cache, \"StreamLLM\" is configured with 4+1020, \"Scissorhands\" is configured with 768+256 where window size=256, \"H2O\" is configured with 768+256, \"CORM\" is configured with 256+256 for fair comparison. For the sake of brevity we use ID to denote dataset here, mapping from ID to dataset can be found in Appendix B . Method Single-Doc QA Multi-Doc QA Summarization 1-1 1-2 1-3 1-4 2-1 2-2 2-3 2-4 3-1 3-2 3-3 3-4 Full 19.0 22.1 36.7 11.8 27.8 31.5 8.3 6.8 26.8 20.7 26.2 0.2 StreamLLM 13.2 15.4 27.2 6.5 24.2 25.4 5.3 4.4 21.6 19.8 24.4 0.1 Scissorhands 16.6 18.7 32.4 9.9 26.3 32.1 8.9 5.7 22.1 20.7 25.4 0.2 H2O 17.9 19.5 34.9 11.5 27.5 29.7 7.5 7.1 24.5 21.0 25.8 0.2 CORM 18.9 22.2 38.6 12.0 27.6 31.6 8.4 7.1 26.4 21.0 25.8 0.2 Table 2: Results (%) on few-shot learning, synthetic, and code tasks. \"Overall\" is computed by the macro-average over major task categories. This is computed on English (EN) tasks, Chinese (ZH) tasks, and all (All) tasks, code tasks are included in both languages. Method Few-shot Learning Synthetic Code Overall 4-1 4-2 4-3 4-4 5-1 5-2 5-3 6-1 6-2 EN ZN All Full 64.0 83.3 41.4 17.3 2.9 7.8 10.0 58.3 52.2 32.8 16.9 28.9 StreamLLM 61.0 82.9 39.1 14.5 1.8 4.7 6.5 57.6 50.0 29.5 14.3 25.7 Scissorhands 52.5 83.6 40.7 17.0 3.1 6.5 7.7 56.8 52.1 31.0 15.8 27.2 H2O 63.0 81.5 39.9 17.0 2.8 7.0 7.3 57.8 52.3 31.8 16.4 28.0 CORM 64.0 83.5 41.3 17.3 2.9 9.0 9.1 58.3 52.0 32.9 16.8 28.9 6 Conclusion In this paper, we investigate a critical memory bottleneck in LLM deployment, KV cache. Inspired by similar queries have similar concerns for keys and recent queries are similar enough, we propose CORM, an unbudgeted KV cache eviction policy for significantly reducing its memory footprint, by reusing recent query attention messages. Through extensive evaluations, we demonstrate that CORM can reduce the inference memory usage of the KV cache by up to 70% without noticeable performance degradation across a variety of tasks. 8" + }, + { + "url": "http://arxiv.org/abs/2309.17453v4", + "title": "Efficient Streaming Language Models with Attention Sinks", + "abstract": "Deploying Large Language Models (LLMs) in streaming applications such as\nmulti-round dialogue, where long interactions are expected, is urgently needed\nbut poses two major challenges. Firstly, during the decoding stage, caching\nprevious tokens' Key and Value states (KV) consumes extensive memory. Secondly,\npopular LLMs cannot generalize to longer texts than the training sequence\nlength. Window attention, where only the most recent KVs are cached, is a\nnatural approach -- but we show that it fails when the text length surpasses\nthe cache size. We observe an interesting phenomenon, namely attention sink,\nthat keeping the KV of initial tokens will largely recover the performance of\nwindow attention. 
In this paper, we first demonstrate that the emergence of\nattention sink is due to the strong attention scores towards initial tokens as\na \"sink\" even if they are not semantically important. Based on the above\nanalysis, we introduce StreamingLLM, an efficient framework that enables LLMs\ntrained with a finite length attention window to generalize to infinite\nsequence lengths without any fine-tuning. We show that StreamingLLM can enable\nLlama-2, MPT, Falcon, and Pythia to perform stable and efficient language\nmodeling with up to 4 million tokens and more. In addition, we discover that\nadding a placeholder token as a dedicated attention sink during pre-training\ncan further improve streaming deployment. In streaming settings, StreamingLLM\noutperforms the sliding window recomputation baseline by up to 22.2x speedup.\nCode and datasets are provided at https://github.com/mit-han-lab/streaming-llm.", + "authors": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis", + "published": "2023-09-29", + "updated": "2024-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.17118v2", + "title": "Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time", + "abstract": "Large language models(LLMs) have sparked a new wave of exciting AI\napplications. Hosting these models at scale requires significant memory\nresources. One crucial memory bottleneck for the deployment stems from the\ncontext window. It is commonly recognized that model weights are memory hungry;\nhowever, the size of key-value embedding stored during the generation process\n(KV cache) can easily surpass the model size. The enormous size of the KV cache\nputs constraints on the inference batch size, which is crucial for high\nthroughput inference workload. Inspired by an interesting observation of the\nattention scores, we hypothesize the persistence of importance: only pivotal\ntokens, which had a substantial influence at one step, will significantly\ninfluence future generations. Based on our empirical verification and\ntheoretical analysis around this hypothesis, we propose Scissorhands, a system\nthat maintains the memory usage of the KV cache at a fixed budget without\nfinetuning the model. In essence, Scissorhands manages the KV cache by storing\nthe pivotal tokens with a higher probability. We validate that Scissorhands\nreduces the inference memory usage of the KV cache by up to 5X without\ncompromising model quality. We further demonstrate that Scissorhands can be\ncombined with 4-bit quantization, traditionally used to compress model weights,\nto achieve up to 20X compression.", + "authors": "Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, Anshumali Shrivastava", + "published": "2023-05-26", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.03762v7", + "title": "Attention Is All You Need", + "abstract": "The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. 
Experiments on two machine translation tasks show these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training data.", + "authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin", + "published": "2017-06-12", + "updated": "2023-08-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.06262v2", + "title": "On the Efficacy of Eviction Policy for Key-Value Constrained Generative Language Model Inference", + "abstract": "Despite the recent success associated with Large Language Models (LLMs), they\nare notably cost-prohibitive to deploy in resource-constrained environments due\nto their excessive memory and computational demands. In addition to model\nparameters, the key-value cache is also stored in GPU memory, growing linearly\nwith batch size and sequence length. As a remedy, recent works have proposed\nvarious eviction policies for maintaining the overhead of key-value cache under\na given budget. This paper embarks on the efficacy of existing eviction\npolicies in terms of importance score calculation and eviction scope\nconstruction. We identify the deficiency of prior policies in these two aspects\nand introduce RoCo, a robust cache omission policy based on temporal attention\nscores and robustness measures. Extensive experimentation spanning prefilling\nand auto-regressive decoding stages validates the superiority of RoCo. Finally,\nwe release EasyKV, a versatile software package dedicated to user-friendly\nkey-value constrained generative inference. Code available at\nhttps://github.com/DRSY/EasyKV.", + "authors": "Siyu Ren, Kenny Q. Zhu", + "published": "2024-02-09", + "updated": "2024-02-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.02150v1", + "title": "Fast Transformer Decoding: One Write-Head is All You Need", + "abstract": "Multi-head attention layers, as used in the Transformer neural sequence\nmodel, are a powerful alternative to RNNs for moving information across and\nbetween sequences. While training these layers is generally fast and simple,\ndue to parallelizability across the length of the sequence, incremental\ninference (where such paralleization is impossible) is often slow, due to the\nmemory-bandwidth cost of repeatedly loading the large \"keys\" and \"values\"\ntensors. We propose a variant called multi-query attention, where the keys and\nvalues are shared across all of the different attention \"heads\", greatly\nreducing the size of these tensors and hence the memory bandwidth requirements\nof incremental decoding. 
We verify experimentally that the resulting models can\nindeed be much faster to decode, and incur only minor quality degradation from\nthe baseline.", + "authors": "Noam Shazeer", + "published": "2019-11-06", + "updated": "2019-11-06", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.14048v3", + "title": "H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models", + "abstract": "Large Language Models (LLMs), despite their recent impressive\naccomplishments, are notably cost-prohibitive to deploy, particularly for\napplications involving long-content generation, such as dialogue systems and\nstory writing. Often, a large amount of transient state information, referred\nto as the KV cache, is stored in GPU memory in addition to model parameters,\nscaling linearly with the sequence length and batch size. In this paper, we\nintroduce a novel approach for implementing the KV cache which significantly\nreduces its memory footprint. Our approach is based on the noteworthy\nobservation that a small portion of tokens contributes most of the value when\ncomputing attention scores. We call these tokens Heavy Hitters (H$_2$). Through\na comprehensive investigation, we find that (i) the emergence of H$_2$ is\nnatural and strongly correlates with the frequent co-occurrence of tokens in\nthe text, and (ii) removing them results in significant performance\ndegradation. Based on these insights, we propose Heavy Hitter Oracle (H$_2$O),\na KV cache eviction policy that dynamically retains a balance of recent and\nH$_2$ tokens. We formulate the KV cache eviction as a dynamic submodular\nproblem and prove (under mild assumptions) a theoretical guarantee for our\nnovel eviction algorithm which could help guide future work. We validate the\naccuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of\ntasks. Our implementation of H$_2$O with 20% heavy hitters improves the\nthroughput over three leading inference systems DeepSpeed Zero-Inference,\nHugging Face Accelerate, and FlexGen by up to 29$\\times$, 29$\\times$, and\n3$\\times$ on OPT-6.7B and OPT-30B. With the same batch size, H2O can reduce the\nlatency by up to 1.9$\\times$. The code is available at\nhttps://github.com/FMInference/H2O.", + "authors": "Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher R\u00e9, Clark Barrett, Zhangyang Wang, Beidi Chen", + "published": "2023-06-24", + "updated": "2023-12-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.13245v3", + "title": "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints", + "abstract": "Multi-query attention (MQA), which only uses a single key-value head,\ndrastically speeds up decoder inference. However, MQA can lead to quality\ndegradation, and moreover it may not be desirable to train a separate model\njust for faster inference. We (1) propose a recipe for uptraining existing\nmulti-head language model checkpoints into models with MQA using 5% of original\npre-training compute, and (2) introduce grouped-query attention (GQA), a\ngeneralization of multi-query attention which uses an intermediate (more than\none, less than number of query heads) number of key-value heads. 
We show that\nuptrained GQA achieves quality close to multi-head attention with comparable\nspeed to MQA.", + "authors": "Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebr\u00f3n, Sumit Sanghai", + "published": "2023-05-22", + "updated": "2023-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.06104v1", + "title": "Transformers are Multi-State RNNs", + "abstract": "Transformers are considered conceptually different compared to the previous\ngeneration of state-of-the-art NLP models - recurrent neural networks (RNNs).\nIn this work, we demonstrate that decoder-only transformers can in fact be\nconceptualized as infinite multi-state RNNs - an RNN variant with unlimited\nhidden state size. We further show that pretrained transformers can be\nconverted into $\\textit{finite}$ multi-state RNNs by fixing the size of their\nhidden state. We observe that several existing transformers cache compression\ntechniques can be framed as such conversion policies, and introduce a novel\npolicy, TOVA, which is simpler compared to these policies. Our experiments with\nseveral long range tasks indicate that TOVA outperforms all other baseline\npolicies, while being nearly on par with the full (infinite) model, and using\nin some cases only $\\frac{1}{8}$ of the original cache size. Our results\nindicate that transformer decoder LLMs often behave in practice as RNNs. They\nalso lay out the option of mitigating one of their most painful computational\nbottlenecks - the size of their cache memory. We publicly release our code at\nhttps://github.com/schwartz-lab-NLP/TOVA.", + "authors": "Matanel Oren, Michael Hassid, Yossi Adi, Roy Schwartz", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. 
Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. 
We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. 
Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. 
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. 
We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. 
To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. 
We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. 
Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. 
We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. 
Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. 
Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consists of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. 
This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. 
In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. 
Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. 
This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. 
Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. 
These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. 
We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-takes fields including education, criminology, finance and\nhealthcare. To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. 
Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. 
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. 
While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. 
Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. 
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. 
Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. 
In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecially, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring that the data related to\nevaluation sets is occasionally used for model training. This phenomenon now\nbecomes more common since pre-training data is often prepared ahead of model\ntest. We conduct extensive experiments to study the effect of benchmark\nleverage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. 
We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. 
Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. 
We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. 
Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. 
Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. 
This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. 
Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. 
Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. 
In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. 
For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. 
Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. 
While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). 
In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. 
ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. 
This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + } + ] + ] + }, + { + "url": "http://arxiv.org/abs/2203.11171v4", + "title": "Self-Consistency Improves Chain of Thought Reasoning in Language Models", + "abstract": "Chain-of-thought prompting combined with pre-trained large language models\nhas achieved encouraging results on complex reasoning tasks. In this paper, we\npropose a new decoding strategy, self-consistency, to replace the naive greedy\ndecoding used in chain-of-thought prompting. It first samples a diverse set of\nreasoning paths instead of only taking the greedy one, and then selects the\nmost consistent answer by marginalizing out the sampled reasoning paths.\nSelf-consistency leverages the intuition that a complex reasoning problem\ntypically admits multiple different ways of thinking leading to its unique\ncorrect answer. Our extensive empirical evaluation shows that self-consistency\nboosts the performance of chain-of-thought prompting with a striking margin on\na range of popular arithmetic and commonsense reasoning benchmarks, including\nGSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and\nARC-challenge (+3.9%).", + "authors": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou", + "published": "2022-03-21", + "updated": "2023-03-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.11610v2", + "title": "Large Language Models Can Self-Improve", + "abstract": "Large Language Models (LLMs) have achieved excellent performances in various\ntasks. However, fine-tuning an LLM requires extensive supervision. Human, on\nthe other hand, may improve their reasoning abilities by self-thinking without\nexternal inputs. In this work, we demonstrate that an LLM is also capable of\nself-improving with only unlabeled datasets. We use a pre-trained LLM to\ngenerate \"high-confidence\" rationale-augmented answers for unlabeled questions\nusing Chain-of-Thought prompting and self-consistency, and fine-tune the LLM\nusing those self-generated solutions as target outputs. We show that our\napproach improves the general reasoning ability of a 540B-parameter LLM\n(74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and\n63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance,\nwithout any ground truth label. 
We conduct ablation studies and show that\nfine-tuning on reasoning is critical for self-improvement.", + "authors": "Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han", + "published": "2022-10-20", + "updated": "2022-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14564v1", + "title": "PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents", + "abstract": "Strategies such as chain-of-thought prompting improve the performance of\nlarge language models (LLMs) on complex reasoning tasks by decomposing input\nexamples into intermediate steps. However, it remains unclear how to apply such\nmethods to reason over long input documents, in which both the decomposition\nand the output of each intermediate step are non-trivial to obtain. In this\nwork, we propose PEARL, a prompting framework to improve reasoning over long\ndocuments, which consists of three stages: action mining, plan formulation, and\nplan execution. 
More specifically, given a question about a long document,\nPEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,\nFIND_EVENT, FIND_RELATION) and then executes them over the document to obtain\nthe answer. Each stage of PEARL is implemented via zero-shot or few-shot\nprompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate\nPEARL on a challenging subset of the QuALITY dataset, which contains questions\nthat require complex reasoning over long narrative texts. PEARL outperforms\nzero-shot and chain-of-thought prompting on this dataset, and ablation\nexperiments show that each stage of PEARL is critical to its performance.\nOverall, PEARL is a first step towards leveraging LLMs to reason over long\ndocuments.", + "authors": "Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.05300v4", + "title": "Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration", + "abstract": "Human intelligence thrives on cognitive synergy, where collaboration among\ndifferent minds yield superior outcomes compared to isolated individuals. In\nthis work, we propose Solo Performance Prompting (SPP), which transforms a\nsingle LLM into a cognitive synergist by engaging in multi-turn\nself-collaboration with multiple personas. A cognitive synergist is an\nintelligent agent that collaboratively combines multiple minds' strengths and\nknowledge to enhance problem-solving in complex tasks. By dynamically\nidentifying and simulating different personas based on task inputs, SPP\nunleashes the potential of cognitive synergy in LLMs. Our in-depth analysis\nshows that assigning multiple fine-grained personas in LLMs improves\nproblem-solving abilities compared to using a single or fixed number of\npersonas. We evaluate SPP on three challenging tasks: Trivia Creative Writing,\nCodenames Collaborative, and Logic Grid Puzzle, encompassing both\nknowledge-intensive and reasoning-intensive types. Unlike previous works, such\nas Chain-of-Thought, that solely enhance the reasoning abilities in LLMs,\nexperimental results demonstrate that SPP effectively reduces factual\nhallucination, and maintains strong reasoning capabilities. Additionally,\ncomparative experiments show that cognitive synergy only emerges in GPT-4 and\ndoes not appear in less capable models, such as GPT-3.5-turbo and\nLlama2-13b-chat, which draws an interesting analogy to human development. Code,\ndata, and prompts can be found at:\nhttps://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.", + "authors": "Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji", + "published": "2023-07-11", + "updated": "2024-03-26", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.05981v1", + "title": "Double Retrieval and Ranking for Accurate Question Answering", + "abstract": "Recent work has shown that an answer verification step introduced in\nTransformer-based answer selection models can significantly improve the state\nof the art in Question Answering. This step is performed by aggregating the\nembeddings of top $k$ answer candidates to support the verification of a target\nanswer. 
Although the approach is intuitive and sound still shows two\nlimitations: (i) the supporting candidates are ranked only according to the\nrelevancy with the question and not with the answer, and (ii) the support\nprovided by the other answer candidates is suboptimal as these are retrieved\nindependently of the target answer. In this paper, we address both drawbacks by\nproposing (i) a double reranking model, which, for each target answer, selects\nthe best support; and (ii) a second neural retrieval stage designed to encode\nquestion and answer pair as the query, which finds more specific verification\ninformation. The results on three well-known datasets for AS2 show consistent\nand significant improvement of the state of the art.", + "authors": "Zeyu Zhang, Thuy Vu, Alessandro Moschitti", + "published": "2022-01-16", + "updated": "2022-01-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.07682v2", + "title": "Emergent Abilities of Large Language Models", + "abstract": "Scaling up language models has been shown to predictably improve performance\nand sample efficiency on a wide range of downstream tasks. This paper instead\ndiscusses an unpredictable phenomenon that we refer to as emergent abilities of\nlarge language models. We consider an ability to be emergent if it is not\npresent in smaller models but is present in larger models. Thus, emergent\nabilities cannot be predicted simply by extrapolating the performance of\nsmaller models. The existence of such emergence implies that additional scaling\ncould further expand the range of capabilities of language models.", + "authors": "Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus", + "published": "2022-06-15", + "updated": "2022-10-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.13551v1", + "title": "Graph Representation of Narrative Context: Coherence Dependency via Retrospective Questions", + "abstract": "This work introduces a novel and practical paradigm for narrative\ncomprehension, stemming from the observation that individual passages within\nnarratives are often cohesively related than being isolated. We therefore\npropose to formulate a graph upon narratives dubbed NARCO that depicts a\ntask-agnostic coherence dependency of the entire context. Especially, edges in\nNARCO encompass retrospective free-form questions between two context snippets\nreflecting high-level coherent relations, inspired by the cognitive perception\nof humans who constantly reinstate relevant events from prior context.\nImportantly, our graph is instantiated through our designed two-stage LLM\nprompting, thereby without reliance on human annotations. We present three\nunique studies on its practical utility, examining the edge efficacy via recap\nidentification, local context augmentation via plot retrieval, and broader\napplications exemplified by long document QA. 
Experiments suggest that our\napproaches leveraging NARCO yield performance boost across all three tasks.", + "authors": "Liyan Xu, Jiangnan Li, Mo Yu, Jie Zhou", + "published": "2024-02-21", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.02311v5", + "title": "PaLM: Scaling Language Modeling with Pathways", + "abstract": "Large language models have been shown to achieve remarkable performance\nacross a variety of natural language tasks using few-shot learning, which\ndrastically reduces the number of task-specific training examples needed to\nadapt the model to a particular application. To further our understanding of\nthe impact of scale on few-shot learning, we trained a 540-billion parameter,\ndensely activated, Transformer language model, which we call Pathways Language\nModel PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML\nsystem which enables highly efficient training across multiple TPU Pods. We\ndemonstrate continued benefits of scaling by achieving state-of-the-art\nfew-shot learning results on hundreds of language understanding and generation\nbenchmarks. On a number of these tasks, PaLM 540B achieves breakthrough\nperformance, outperforming the finetuned state-of-the-art on a suite of\nmulti-step reasoning tasks, and outperforming average human performance on the\nrecently released BIG-bench benchmark. A significant number of BIG-bench tasks\nshowed discontinuous improvements from model scale, meaning that performance\nsteeply increased as we scaled to our largest model. PaLM also has strong\ncapabilities in multilingual tasks and source code generation, which we\ndemonstrate on a wide array of benchmarks. We additionally provide a\ncomprehensive analysis on bias and toxicity, and study the extent of training\ndata memorization with respect to model scale. Finally, we discuss the ethical\nconsiderations related to large language models and discuss potential\nmitigation strategies.", + "authors": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. 
Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel", + "published": "2022-04-05", + "updated": "2022-10-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.08073v1", + "title": "Constitutional AI: Harmlessness from AI Feedback", + "abstract": "As AI systems become more capable, we would like to enlist their help to\nsupervise other AIs. We experiment with methods for training a harmless AI\nassistant through self-improvement, without any human labels identifying\nharmful outputs. The only human oversight is provided through a list of rules\nor principles, and so we refer to the method as 'Constitutional AI'. The\nprocess involves both a supervised learning and a reinforcement learning phase.\nIn the supervised phase we sample from an initial model, then generate\nself-critiques and revisions, and then finetune the original model on revised\nresponses. 
In the RL phase, we sample from the finetuned model, use a model to\nevaluate which of the two samples is better, and then train a preference model\nfrom this dataset of AI preferences. We then train with RL using the preference\nmodel as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a\nresult we are able to train a harmless but non-evasive AI assistant that\nengages with harmful queries by explaining its objections to them. Both the SL\nand RL methods can leverage chain-of-thought style reasoning to improve the\nhuman-judged performance and transparency of AI decision making. These methods\nmake it possible to control AI behavior more precisely and with far fewer human\nlabels.", + "authors": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan", + "published": "2022-12-15", + "updated": "2022-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.18703v7", + "title": "Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey", + "abstract": "Large language models (LLMs) have significantly advanced the field of natural\nlanguage processing (NLP), providing a highly useful, task-agnostic foundation\nfor a wide range of applications. However, directly applying LLMs to solve\nsophisticated problems in specific domains meets many hurdles, caused by the\nheterogeneity of domain data, the sophistication of domain knowledge, the\nuniqueness of domain objectives, and the diversity of the constraints (e.g.,\nvarious social norms, cultural conformity, religious beliefs, and ethical\nstandards in the domain applications). Domain specification techniques are key\nto make large language models disruptive in many applications. Specifically, to\nsolve these hurdles, there has been a notable increase in research and\npractices conducted in recent years on the domain specialization of LLMs. This\nemerging field of study, with its substantial potential for impact,\nnecessitates a comprehensive and systematic review to better summarize and\nguide ongoing work in this area. In this article, we present a comprehensive\nsurvey on domain specification techniques for large language models, an\nemerging direction critical for large language model applications. First, we\npropose a systematic taxonomy that categorizes the LLM domain-specialization\ntechniques based on the accessibility to LLMs and summarizes the framework for\nall the subcategories as well as their relations and differences to each other.\nSecond, we present an extensive taxonomy of critical application domains that\ncan benefit dramatically from specialized LLMs, discussing their practical\nsignificance and open challenges. 
Last, we offer our insights into the current\nresearch status and future trends in this area.", + "authors": "Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao", + "published": "2023-05-30", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10435v2", + "title": "PAL: Program-aided Language Models", + "abstract": "Large language models (LLMs) have recently demonstrated an impressive ability\nto perform arithmetic and symbolic reasoning tasks, when provided with a few\nexamples at test time (\"few-shot prompting\"). Much of this success can be\nattributed to prompting methods such as \"chain-of-thought'', which employ LLMs\nfor both understanding the problem description by decomposing it into steps, as\nwell as solving each step of the problem. While LLMs seem to be adept at this\nsort of step-by-step decomposition, LLMs often make logical and arithmetic\nmistakes in the solution part, even when the problem is decomposed correctly.\nIn this paper, we present Program-Aided Language models (PAL): a novel approach\nthat uses the LLM to read natural language problems and generate programs as\nthe intermediate reasoning steps, but offloads the solution step to a runtime\nsuch as a Python interpreter. With PAL, decomposing the natural language\nproblem into runnable steps remains the only learning task for the LLM, while\nsolving is delegated to the interpreter. We demonstrate this synergy between a\nneural LLM and a symbolic interpreter across 13 mathematical, symbolic, and\nalgorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all\nthese natural language reasoning tasks, generating code using an LLM and\nreasoning using a Python interpreter leads to more accurate results than much\nlarger models. For example, PAL using Codex achieves state-of-the-art few-shot\naccuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B\nwhich uses chain-of-thought by absolute 15% top-1. Our code and data are\npublicly available at http://reasonwithpal.com/ .", + "authors": "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig", + "published": "2022-11-18", + "updated": "2023-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17651v2", + "title": "Self-Refine: Iterative Refinement with Self-Feedback", + "abstract": "Like humans, large language models (LLMs) do not always generate the best\noutput on their first try. Motivated by how humans refine their written text,\nwe introduce Self-Refine, an approach for improving initial outputs from LLMs\nthrough iterative feedback and refinement. The main idea is to generate an\ninitial output using an LLMs; then, the same LLMs provides feedback for its\noutput and uses it to refine itself, iteratively. 
Self-Refine does not require\nany supervised training data, additional training, or reinforcement learning,\nand instead uses a single LLM as the generator, refiner, and feedback provider.\nWe evaluate Self-Refine across 7 diverse tasks, ranging from dialog response\ngeneration to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,\nand GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine\nare preferred by humans and automatic metrics over those generated with the\nsame LLM using conventional one-step generation, improving by ~20% absolute on\naverage in task performance. Our work demonstrates that even state-of-the-art\nLLMs like GPT-4 can be further improved at test time using our simple,\nstandalone approach.", + "authors": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark", + "published": "2023-03-30", + "updated": "2023-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. 
In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.10601v2", + "title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models", + "abstract": "Language models are increasingly being deployed for general problem solving\nacross a wide range of tasks, but are still confined to token-level,\nleft-to-right decision-making processes during inference. This means they can\nfall short in tasks that require exploration, strategic lookahead, or where\ninitial decisions play a pivotal role. To surmount these challenges, we\nintroduce a new framework for language model inference, Tree of Thoughts (ToT),\nwhich generalizes over the popular Chain of Thought approach to prompting\nlanguage models, and enables exploration over coherent units of text (thoughts)\nthat serve as intermediate steps toward problem solving. ToT allows LMs to\nperform deliberate decision making by considering multiple different reasoning\npaths and self-evaluating choices to decide the next course of action, as well\nas looking ahead or backtracking when necessary to make global choices. Our\nexperiments show that ToT significantly enhances language models'\nproblem-solving abilities on three novel tasks requiring non-trivial planning\nor search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in\nGame of 24, while GPT-4 with chain-of-thought prompting only solved 4% of\ntasks, our method achieved a success rate of 74%. Code repo with all prompts:\nhttps://github.com/princeton-nlp/tree-of-thought-llm.", + "authors": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. 
Griffiths, Yuan Cao, Karthik Narasimhan", + "published": "2023-05-17", + "updated": "2023-12-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.01116v1", + "title": "The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only", + "abstract": "Large language models are commonly trained on a mixture of filtered web data\nand curated high-quality corpora, such as social media conversations, books, or\ntechnical papers. This curation process is believed to be necessary to produce\nperformant models with broad zero-shot generalization abilities. However, as\nlarger models requiring pretraining on trillions of tokens are considered, it\nis unclear how scalable is curation and whether we will run out of unique\nhigh-quality data soon. At variance with previous beliefs, we show that\nproperly filtered and deduplicated web data alone can lead to powerful models;\neven significantly outperforming models from the state-of-the-art trained on\nThe Pile. Despite extensive filtering, the high-quality data we extract from\nthe web is still plentiful, and we are able to obtain five trillion tokens from\nCommonCrawl. We publicly release an extract of 600 billion tokens from our\nRefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.", + "authors": "Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay", + "published": "2023-06-01", + "updated": "2023-06-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17760v2", + "title": "CAMEL: Communicative Agents for \"Mind\" Exploration of Large Language Model Society", + "abstract": "The rapid advancement of chat-based language models has led to remarkable\nprogress in complex task-solving. However, their success heavily relies on\nhuman input to guide the conversation, which can be challenging and\ntime-consuming. This paper explores the potential of building scalable\ntechniques to facilitate autonomous cooperation among communicative agents, and\nprovides insight into their \"cognitive\" processes. To address the challenges of\nachieving autonomous cooperation, we propose a novel communicative agent\nframework named role-playing. Our approach involves using inception prompting\nto guide chat agents toward task completion while maintaining consistency with\nhuman intentions. We showcase how role-playing can be used to generate\nconversational data for studying the behaviors and capabilities of a society of\nagents, providing a valuable resource for investigating conversational language\nmodels. In particular, we conduct comprehensive studies on\ninstruction-following cooperation in multi-agent settings. 
Our contributions\ninclude introducing a novel communicative agent framework, offering a scalable\napproach for studying the cooperative behaviors and capabilities of multi-agent\nsystems, and open-sourcing our library to support research on communicative\nagents and beyond: https://github.com/camel-ai/camel.", + "authors": "Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem", + "published": "2023-03-31", + "updated": "2023-11-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.CY", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.18703v7", + "title": "Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey", + "abstract": "Large language models (LLMs) have significantly advanced the field of natural\nlanguage processing (NLP), providing a highly useful, task-agnostic foundation\nfor a wide range of applications. However, directly applying LLMs to solve\nsophisticated problems in specific domains meets many hurdles, caused by the\nheterogeneity of domain data, the sophistication of domain knowledge, the\nuniqueness of domain objectives, and the diversity of the constraints (e.g.,\nvarious social norms, cultural conformity, religious beliefs, and ethical\nstandards in the domain applications). Domain specification techniques are key\nto make large language models disruptive in many applications. Specifically, to\nsolve these hurdles, there has been a notable increase in research and\npractices conducted in recent years on the domain specialization of LLMs. This\nemerging field of study, with its substantial potential for impact,\nnecessitates a comprehensive and systematic review to better summarize and\nguide ongoing work in this area. In this article, we present a comprehensive\nsurvey on domain specification techniques for large language models, an\nemerging direction critical for large language model applications. First, we\npropose a systematic taxonomy that categorizes the LLM domain-specialization\ntechniques based on the accessibility to LLMs and summarizes the framework for\nall the subcategories as well as their relations and differences to each other.\nSecond, we present an extensive taxonomy of critical application domains that\ncan benefit dramatically from specialized LLMs, discussing their practical\nsignificance and open challenges. Last, we offer our insights into the current\nresearch status and future trends in this area.", + "authors": "Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao", + "published": "2023-05-30", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. 
In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.11366v4", + "title": "Reflexion: Language Agents with Verbal Reinforcement Learning", + "abstract": "Large language models (LLMs) have been increasingly used to interact with\nexternal environments (e.g., games, compilers, APIs) as goal-driven agents.\nHowever, it remains challenging for these language agents to quickly and\nefficiently learn from trial-and-error as traditional reinforcement learning\nmethods require extensive training samples and expensive model fine-tuning. We\npropose Reflexion, a novel framework to reinforce language agents not by\nupdating weights, but instead through linguistic feedback. Concretely,\nReflexion agents verbally reflect on task feedback signals, then maintain their\nown reflective text in an episodic memory buffer to induce better\ndecision-making in subsequent trials. Reflexion is flexible enough to\nincorporate various types (scalar values or free-form language) and sources\n(external or internally simulated) of feedback signals, and obtains significant\nimprovements over a baseline agent across diverse tasks (sequential\ndecision-making, coding, language reasoning). For example, Reflexion achieves a\n91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous\nstate-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis\nstudies using different feedback signals, feedback incorporation methods, and\nagent types, and provide insights into how they affect performance.", + "authors": "Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao", + "published": "2023-03-20", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.01068v4", + "title": "OPT: Open Pre-trained Transformer Language Models", + "abstract": "Large language models, which are often trained for hundreds of thousands of\ncompute days, have shown remarkable capabilities for zero- and few-shot\nlearning. Given their computational cost, these models are difficult to\nreplicate without significant capital. For the few that are available through\nAPIs, no access is granted to the full model weights, making them difficult to\nstudy. We present Open Pre-trained Transformers (OPT), a suite of decoder-only\npre-trained transformers ranging from 125M to 175B parameters, which we aim to\nfully and responsibly share with interested researchers. 
We show that OPT-175B\nis comparable to GPT-3, while requiring only 1/7th the carbon footprint to\ndevelop. We are also releasing our logbook detailing the infrastructure\nchallenges we faced, along with code for experimenting with all of the released\nmodels.", + "authors": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer", + "published": "2022-05-02", + "updated": "2022-06-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14688v1", + "title": "ExpertPrompting: Instructing Large Language Models to be Distinguished Experts", + "abstract": "The answering quality of an aligned large language model (LLM) can be\ndrastically improved if treated with proper crafting of prompts. In this paper,\nwe propose ExpertPrompting to elicit the potential of LLMs to answer as\ndistinguished experts. We first utilize In-Context Learning to automatically\nsynthesize detailed and customized descriptions of the expert identity for each\nspecific instruction, and then ask LLMs to provide answer conditioned on such\nagent background. Based on this augmented prompting strategy, we produce a new\nset of instruction-following data using GPT-3.5, and train a competitive\nopen-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation\nto show that 1) the expert data is of significantly higher quality than vanilla\nanswers, and 2) ExpertLLaMA outperforms existing open-source opponents and\nachieves 96\\% of the original ChatGPT's capability. All data and the\nExpertLLaMA model will be made publicly available at\n\\url{https://github.com/OFA-Sys/ExpertLLaMA}.", + "authors": "Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.00352v5", + "title": "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework", + "abstract": "Remarkable progress has been made on automated problem solving through\nsocieties of agents based on large language models (LLMs). Existing LLM-based\nmulti-agent systems can already solve simple dialogue tasks. Solutions to more\ncomplex tasks, however, are complicated through logic inconsistencies due to\ncascading hallucinations caused by naively chaining LLMs. Here we introduce\nMetaGPT, an innovative meta-programming framework incorporating efficient human\nworkflows into LLM-based multi-agent collaborations. MetaGPT encodes\nStandardized Operating Procedures (SOPs) into prompt sequences for more\nstreamlined workflows, thus allowing agents with human-like domain expertise to\nverify intermediate results and reduce errors. MetaGPT utilizes an assembly\nline paradigm to assign diverse roles to various agents, efficiently breaking\ndown complex tasks into subtasks involving many agents working together. On\ncollaborative software engineering benchmarks, MetaGPT generates more coherent\nsolutions than previous chat-based multi-agent systems. 
Our project can be\nfound at https://github.com/geekan/MetaGPT", + "authors": "Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, J\u00fcrgen Schmidhuber", + "published": "2023-08-01", + "updated": "2023-11-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14325v1", + "title": "Improving Factuality and Reasoning in Language Models through Multiagent Debate", + "abstract": "Large language models (LLMs) have demonstrated remarkable capabilities in\nlanguage generation, understanding, and few-shot learning in recent years. An\nextensive body of work has explored how their performance may be further\nimproved through the tools of prompting, ranging from verification,\nself-consistency, or intermediate scratchpads. In this paper, we present a\ncomplementary approach to improve language responses where multiple language\nmodel instances propose and debate their individual responses and reasoning\nprocesses over multiple rounds to arrive at a common final answer. Our findings\nindicate that this approach significantly enhances mathematical and strategic\nreasoning across a number of tasks. We also demonstrate that our approach\nimproves the factual validity of generated content, reducing fallacious answers\nand hallucinations that contemporary models are prone to. Our approach may be\ndirectly applied to existing black-box models and uses identical procedure and\nprompts for all tasks we investigate. Overall, our findings suggest that such\n\"society of minds\" approach has the potential to significantly advance the\ncapabilities of LLMs and pave the way for further breakthroughs in language\ngeneration and understanding.", + "authors": "Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.12588v4", + "title": "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks", + "abstract": "Recently, there has been significant progress in teaching language models to\nperform step-by-step reasoning to solve complex numerical reasoning tasks.\nChain-of-thoughts prompting (CoT) is by far the state-of-art method for these\ntasks. CoT uses language models to perform both reasoning and computation in\nthe multi-step `thought' process. To disentangle computation from reasoning, we\npropose `Program of Thoughts' (PoT), which uses language models (mainly Codex)\nto express the reasoning process as a program. The computation is relegated to\nan external computer, which executes the generated programs to derive the\nanswer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP,\nTabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA)\nfor both few-shot and zero-shot setups. Under both few-shot and zero-shot\nsettings, PoT can show an average performance gain over CoT by around 12\\%\nacross all the evaluated datasets. By combining PoT with self-consistency\ndecoding, we can achieve SoTA performance on all math problem datasets and\nnear-SoTA performance on financial datasets. 
All of our data and code are\nreleased in Github https://github.com/wenhuchen/Program-of-Thoughts", + "authors": "Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen", + "published": "2022-11-22", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.12032v5", + "title": "From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning", + "abstract": "In the realm of Large Language Models (LLMs), the balance between instruction\ndata quality and quantity is a focal point. Recognizing this, we introduce a\nself-guided methodology for LLMs to autonomously discern and select cherry\nsamples from open-source datasets, effectively minimizing manual curation and\npotential cost for instruction tuning an LLM. Our key innovation, the\nInstruction-Following Difficulty (IFD) metric, emerges as a pivotal metric to\nidentify discrepancies between a model's expected responses and its intrinsic\ngeneration capability. Through the application of IFD, cherry samples can be\npinpointed, leading to a marked uptick in model training efficiency. Empirical\nvalidations on datasets like Alpaca and WizardLM underpin our findings; with a\nmere $10\\%$ of original data input, our strategy showcases improved results.\nThis synthesis of self-guided cherry-picking and the IFD metric signifies a\ntransformative leap in the instruction tuning of LLMs, promising both\nefficiency and resource-conscious advancements. Codes, data, and models are\navailable: https://github.com/tianyi-lab/Cherry_LLM", + "authors": "Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, Jing Xiao", + "published": "2023-08-23", + "updated": "2024-04-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.08155v2", + "title": "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation", + "abstract": "AutoGen is an open-source framework that allows developers to build LLM\napplications via multiple agents that can converse with each other to\naccomplish tasks. AutoGen agents are customizable, conversable, and can operate\nin various modes that employ combinations of LLMs, human inputs, and tools.\nUsing AutoGen, developers can also flexibly define agent interaction behaviors.\nBoth natural language and computer code can be used to program flexible\nconversation patterns for different applications. AutoGen serves as a generic\ninfrastructure to build diverse applications of various complexities and LLM\ncapacities. Empirical studies demonstrate the effectiveness of the framework in\nmany example applications, with domains ranging from mathematics, coding,\nquestion answering, operations research, online decision-making, entertainment,\netc.", + "authors": "Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang", + "published": "2023-08-16", + "updated": "2023-10-03", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.04091v3", + "title": "Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models", + "abstract": "Large language models (LLMs) have recently been shown to deliver impressive\nperformance in various NLP tasks. 
To tackle multi-step reasoning tasks,\nfew-shot chain-of-thought (CoT) prompting includes a few manually crafted\nstep-by-step reasoning demonstrations which enable LLMs to explicitly generate\nreasoning steps and improve their reasoning task accuracy. To eliminate the\nmanual effort, Zero-shot-CoT concatenates the target problem statement with\n\"Let's think step by step\" as an input prompt to LLMs. Despite the success of\nZero-shot-CoT, it still suffers from three pitfalls: calculation errors,\nmissing-step errors, and semantic misunderstanding errors. To address the\nmissing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of\ntwo components: first, devising a plan to divide the entire task into smaller\nsubtasks, and then carrying out the subtasks according to the plan. To address\nthe calculation errors and improve the quality of generated reasoning steps, we\nextend PS prompting with more detailed instructions and derive PS+ prompting.\nWe evaluate our proposed prompting strategy on ten datasets across three\nreasoning problems. The experimental results over GPT-3 show that our proposed\nzero-shot prompting consistently outperforms Zero-shot-CoT across all datasets\nby a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought\nPrompting, and has comparable performance with 8-shot CoT prompting on the math\nreasoning problem. The code can be found at\nhttps://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.", + "authors": "Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim", + "published": "2023-05-06", + "updated": "2023-05-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.04684v2", + "title": "Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind", + "abstract": "When reading a story, humans can quickly understand new fictional characters\nwith a few observations, mainly by drawing analogies to fictional and real\npeople they already know. This reflects the few-shot and meta-learning essence\nof humans' inference of characters' mental states, i.e., theory-of-mind (ToM),\nwhich is largely ignored in existing research. We fill this gap with a novel\nNLP dataset, ToM-in-AMC, the first assessment of machines' meta-learning of ToM\nin a realistic narrative understanding scenario. Our dataset consists of ~1,000\nparsed movie scripts, each corresponding to a few-shot character understanding\ntask that requires models to mimic humans' ability of fast digesting characters\nwith a few starting scenes in a new movie.\n We propose a novel ToM prompting approach designed to explicitly assess the\ninfluence of multiple ToM dimensions. It surpasses existing baseline models,\nunderscoring the significance of modeling multiple ToM dimensions for our task.\nOur extensive human study verifies that humans are capable of solving our\nproblem by inferring characters' mental states based on their previously seen\nmovies. 
In comparison, our systems based on either state-of-the-art large\nlanguage models (GPT-4) or meta-learning algorithms lags >20% behind,\nhighlighting a notable limitation in existing approaches' ToM capabilities.", + "authors": "Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou", + "published": "2022-11-09", + "updated": "2024-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.11760v7", + "title": "Large Language Models Understand and Can be Enhanced by Emotional Stimuli", + "abstract": "Emotional intelligence significantly impacts our daily behaviors and\ninteractions. Although Large Language Models (LLMs) are increasingly viewed as\na stride toward artificial general intelligence, exhibiting impressive\nperformance in numerous tasks, it is still uncertain if LLMs can genuinely\ngrasp psychological emotional stimuli. Understanding and responding to\nemotional cues gives humans a distinct advantage in problem-solving. In this\npaper, we take the first step towards exploring the ability of LLMs to\nunderstand emotional stimuli. To this end, we first conduct automatic\nexperiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,\nLlama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative\napplications that represent comprehensive evaluation scenarios. Our automatic\nexperiments show that LLMs have a grasp of emotional intelligence, and their\nperformance can be improved with emotional prompts (which we call\n\"EmotionPrompt\" that combines the original prompt with emotional stimuli),\ne.g., 8.00% relative performance improvement in Instruction Induction and 115%\nin BIG-Bench. In addition to those deterministic tasks that can be\nautomatically evaluated using existing metrics, we conducted a human study with\n106 participants to assess the quality of generative tasks using both vanilla\nand emotional prompts. Our human study results demonstrate that EmotionPrompt\nsignificantly boosts the performance of generative tasks (10.9% average\nimprovement in terms of performance, truthfulness, and responsibility metrics).\nWe provide an in-depth discussion regarding why EmotionPrompt works for LLMs\nand the factors that may influence its performance. We posit that EmotionPrompt\nheralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs\ninteraction.", + "authors": "Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie", + "published": "2023-07-14", + "updated": "2023-11-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.11672v2", + "title": "Open-ended Commonsense Reasoning with Unrestricted Answer Scope", + "abstract": "Open-ended Commonsense Reasoning is defined as solving a commonsense question\nwithout providing 1) a short list of answer candidates and 2) a pre-defined\nanswer scope. Conventional ways of formulating the commonsense question into a\nquestion-answering form or utilizing external knowledge to learn\nretrieval-based methods are less applicable in the open-ended setting due to an\ninherent challenge. Without pre-defining an answer scope or a few candidates,\nopen-ended commonsense reasoning entails predicting answers by searching over\nan extremely large searching space. 
Moreover, most questions require implicit\nmulti-hop reasoning, which presents even more challenges to our problem. In\nthis work, we leverage pre-trained language models to iteratively retrieve\nreasoning paths on the external knowledge base, which does not require\ntask-specific supervision. The reasoning paths can help to identify the most\nprecise answer to the commonsense question. We conduct experiments on two\ncommonsense benchmark datasets. Compared to other approaches, our proposed\nmethod achieves better performance both quantitatively and qualitatively.", + "authors": "Chen Ling, Xuchao Zhang, Xujiang Zhao, Yanchi Liu, Wei Cheng, Mika Oishi, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao", + "published": "2023-10-18", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14992v2", + "title": "Reasoning with Language Model is Planning with World Model", + "abstract": "Large language models (LLMs) have shown remarkable reasoning capabilities,\nespecially when prompted to generate intermediate reasoning steps (e.g.,\nChain-of-Thought, CoT). However, LLMs can still struggle with problems that are\neasy for humans, such as generating action plans for executing tasks in a given\nenvironment, or performing complex math, logical, and commonsense reasoning.\nThe deficiency stems from the key fact that LLMs lack an internal\n$\\textit{world model}$ to predict the world $\\textit{state}$ (e.g., environment\nstatus, intermediate variable values) and simulate long-term outcomes of\nactions. This prevents LLMs from performing deliberate planning akin to human\nbrains, which involves exploring alternative reasoning paths, anticipating\nfuture states and rewards, and iteratively refining existing reasoning steps.\nTo overcome the limitations, we propose a new LLM reasoning framework,\n$\\underline{R}$easoning vi$\\underline{a}$ $\\underline{P}$lanning\n$\\textbf{(RAP)}$. RAP repurposes the LLM as both a world model and a reasoning\nagent, and incorporates a principled planning algorithm (based on Monto Carlo\nTree Search) for strategic exploration in the vast reasoning space. During\nreasoning, the LLM (as agent) incrementally builds a reasoning tree under the\nguidance of the LLM (as world model) and task-specific rewards, and obtains a\nhigh-reward reasoning path efficiently with a proper balance between\nexploration $\\textit{vs.}$ exploitation. We apply RAP to a variety of\nchallenging reasoning problems including plan generation, math reasoning, and\nlogical inference. Empirical results on these tasks demonstrate the superiority\nof RAP over various strong baselines, including CoT and least-to-most prompting\nwith self-consistency. RAP on LLAMA-33B surpasses CoT on GPT-4 with 33%\nrelative improvement in a plan generation setting.", + "authors": "Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu", + "published": "2023-05-24", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.10625v3", + "title": "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models", + "abstract": "Chain-of-thought prompting has demonstrated remarkable performance on various\nnatural language reasoning tasks. 
However, it tends to perform poorly on tasks\nwhich requires solving problems harder than the exemplars shown in the prompts.\nTo overcome this challenge of easy-to-hard generalization, we propose a novel\nprompting strategy, least-to-most prompting. The key idea in this strategy is\nto break down a complex problem into a series of simpler subproblems and then\nsolve them in sequence. Solving each subproblem is facilitated by the answers\nto previously solved subproblems. Our experimental results on tasks related to\nsymbolic manipulation, compositional generalization, and math reasoning reveal\nthat least-to-most prompting is capable of generalizing to more difficult\nproblems than those seen in the prompts. A notable finding is that when the\nGPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve\nthe compositional generalization benchmark SCAN in any split (including length\nsplit) with an accuracy of at least 99% using just 14 exemplars, compared to\nonly 16% accuracy with chain-of-thought prompting. This is particularly\nnoteworthy because neural-symbolic models in the literature that specialize in\nsolving SCAN are trained on the entire training set containing over 15,000\nexamples. We have included prompts for all the tasks in the Appendix.", + "authors": "Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi", + "published": "2022-05-21", + "updated": "2023-04-16", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.01337v2", + "title": "An Empirical Study on Challenging Math Problem Solving with GPT-4", + "abstract": "Employing Large Language Models (LLMs) to address mathematical problems is an\nintriguing research endeavor, considering the abundance of math problems\nexpressed in natural language across numerous science and engineering fields.\nWhile several prior works have investigated solving elementary mathematics\nusing LLMs, this work explores the frontier of using GPT-4 for solving more\ncomplex and challenging math problems. We evaluate various ways of using GPT-4.\nSome of them are adapted from existing work, and one is MathChat, a\nconversational problem-solving framework newly proposed in this work. We\nperform the evaluation on difficult high school competition problems from the\nMATH dataset, which shows the advantage of the proposed conversational\napproach.", + "authors": "Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang", + "published": "2023-06-02", + "updated": "2023-06-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. 
Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. 
Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. 
Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. 
Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. 
The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. 
In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. 
Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. 
We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs.
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. 
We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. 
We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. 
Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. 
Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. 
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. 
The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. 
Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. 
Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. 
Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertain to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D.
Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. 
Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. 
Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. 
To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. 
Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from the perspectives of both computer\nscience and the Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70).
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. 
To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. 
Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? 
An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. 
Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. 
The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. 
We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. 
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } +] \ No newline at end of file