diff --git "a/related_34K/test_related_short_2404.17729v1.json" "b/related_34K/test_related_short_2404.17729v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.17729v1.json" @@ -0,0 +1,1431 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17729v1", + "title": "CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving", + "abstract": "Large Language Models (LLMs) have shown great ability in solving traditional\nnatural language tasks and elementary reasoning tasks with appropriate\nprompting techniques. However, their ability is still limited in solving\ncomplicated science problems. In this work, we aim to push the upper bound of\nthe reasoning capability of LLMs by proposing a collaborative multi-agent,\nmulti-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs\nto play different roles in a problem-solving team, and encourage different\nrole-play agents to collaboratively solve the target task. In particular, we\ndiscover that applying different reasoning paths for different roles is an\neffective strategy to implement few-shot prompting approaches in the\nmulti-agent scenarios. Empirical results demonstrate the effectiveness of the\nproposed methods on two college-level science problems over competitive\nbaselines. Our further analysis shows the necessity of prompting LLMs to play\ndifferent roles or experts independently. We release the code at:\nhttps://github.com/amazon-science/comm-prompt", + "authors": "Pei Chen, Boran Han, Shuai Zhang", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "LLMs have shown remarkable proficiency in solving many downstream tasks (Qu et al., 2020b; Chen et al., 2021; Xu et al., 2024c,b), paving the way towards Artificial General Intelligence. With the advent of GPT-3 (Brown et al., 2020) and its emergent abilities (Wei et al., 2022a) in solving downstream tasks on both zero-shot and few-shot settings, many decoder-only LLMs follow (Ling et al., 2023b), such as PaLM (Chowdhery et al., 2022), LLaMA (Brown et al., 2020; OpenAI, 2023), BLOOM (Workshop et al., 2023), Claude (Bai et al., 2022), OPT (Zhang et al., 2022), Mistral (Jiang et al., 2023), Falcon (Penedo et al., 2023) etc. Considering the inference speed and economic expenditure, we choose GPT-3.5 as the backbone model for all the baselines and our CoMM approach. In order to unlock the potential of the LLMs in solving downstream tasks (Yi and Qu, 2022; Chen et al., 2022; Qu et al., 2020a; Zhang et al., 2023; Yu et al., 2024; Xu et al., 2024a), many prompting approaches arise, exempting from manipulating the billion-level parameters (Li et al., 2023c). Among these prompting methods, ordinary prompting methods follow Brown et al. (2020) and employ task descriptions and sample demonstrations (fewshot) as the prompts for downstream tasks. To alleviate the difficulty of directly outputting the answer for LLMs, many prompting methods simplify the process by predicting the middle reasoning steps (chain-of-thought (Wei et al., 2022b)) or answering the decomposed sub-problems first (Wang et al., 2023b; Yao et al., 2023; Hao et al., 2023; Zhou et al., 2023; Ling et al., 2024). To overcome the lack of computing ability and outdated knowledge base, some work prompt LLMs to utilize external tools (Gao et al., 2023; Chen et al., 2023). 
To further unlock the ability of LLMs in solving complicated problems, agent-based methods that prompt LLMs to play specific roles have become a trend. Among them, single-agent methods only use one instance of an LLM. ExpertPrompt (Xu et al., 2023) prompts an LLM to play a domain expert and successfully elicits the LLM to answer domain questions. [Figure 2: Overall Framework of CoMM: An Example from College Physics with the Few-shot Setting. Step 1: Explain the environment and task as the system message prompt xs = fs(x). Step 2: Prompt the three agents P1(\u03b8), P2(\u03b8), P3(\u03b8) to play the domain experts and the summarizer, with their role names, responsibilities, and principles. Step 3: Each role responds based on the current discussion, and, when necessary, multi-turn dialogues are facilitated. Role responses: \"I solve the problem as a physicist, with a physical reasoning path.\" \"I solve the problem as a mathematician, with a mathematical reasoning path.\" \"I summarize.\" The summarizer then outputs the final answer for the given task x.] EmotionPrompt (Li et al., 2023a) improves the performance of agents with emotional prompts. Huang et al. (2022); Shinn et al. (2023); Madaan et al. (2023) prompt LLMs to do self-reflection or self-refinement to correct their mistakes. Wang et al. (2023a); Sun et al. (2023) prompt LLMs to do planning before solving a specific task. Wang et al. (2023c) prompts a single agent to play multiple roles with different personas. Another branch of agent-based approaches uses multiple agents. For example, Liang et al. (2023); Chan et al. (2023); Du et al. (2023) prompt LLMs to play different roles in debating for problem-solving. ChatEval (Chan et al., 2023) uses multiple agents debating for automatic LLM evaluation. MathChat (Wu et al., 2023b) proposed a conversational framework to solve math problems through interactions between the user and an LLM agent. Park et al. (2023) and Li et al. (2023b) prompt LLMs to play different agents for simulating human behaviors. Our work is closely related to these works, but our aim is to prompt LLMs to play different domain experts in a collaborative framework for complicated reasoning problems, and to study how to embed the few-shot examples into the multi-agent framework. Along with the agent-based prompting methods, many open-sourced applications have come out. For example, AutoGPT (Wu et al., 2023a) plays as AI agents that attempt to achieve a given goal by breaking it into sub-tasks and using the internet and other tools in an automatic loop. AutoGen (Wu et al., 2023a) designs a framework for building LLM applications based on multi-agent conversations. MetaGPT (Hong et al., 2023) prompts multiple agents to play product managers, architects, project managers, and engineers for a software project. SkyAGI (Park et al., 2023) elicits emergent human-behavior simulation capability in LLMs. While sharing the same multi-agent framework, our work focuses on exploring the effectiveness of the framework, i.e., we aim to answer whether multiple agents are necessary and how to prompt multiple agents to work collaboratively.", + "pre_questions": [], + "main_content": "Introduction Large Language Models (LLMs) such as GPT (Brown et al., 2020; OpenAI, 2023), LLaMA (Touvron et al., 2023a,b) and PaLM (Chowdhery et al., 2022), have shown remarkable proficiency in solving many downstream tasks (Liu et al., 2021), without further fine-tuning the model parameters. 
However, their ability is still limited in solving reasoning and mathematical problems (Wei et al., 2022b), especially complicated science problems (Ma et al., 2023; Xu et al., 2023; Ling et al., 2023a). In consideration of this limitation, and the costly fine-tuning overhead of LLMs with billion-level parameters, many prompting methods have emerged, i.e., methods that carefully craft input queries to effectively communicate with LLMs and obtain desired outputs. Apart from the benefit of exempting users from manipulating the parameters of the LLMs, these prompting methods seamlessly integrate the pre-trained models into downstream tasks by eliciting desired model behaviors (Sahoo et al., 2024). Among these endeavored prompting approaches, some prompt LLMs to reason with multiple intermediate steps or subproblems for reasoning tasks (Wei et al., 2022b; Wang et al., 2023b; Yao et al., 2023; Hao et al., 2023; Zhou et al., 2023), and some prompt LLMs to take advantage of external tools (Gao et al., 2023; Chen et al., 2023). Although these methods have further pushed the upper-bound reasoning and math abilities of LLMs, their potential for complicated science problems still awaits further improvement. Taking the college-level physics problem in Figure 1 as an example, we can observe that the Chain-of-thought prompting method still suffers from knowledge and factual errors (incorrect equations used for the given problem) and computing errors (incorrect calculations). Recently, agent-based prompting methods that prompt an LLM to play a specific role or act as an intelligent agent further unlock the ability of LLMs to solve complicated problems. For example, Xu et al. (2023) prompt an LLM to play a domain expert and successfully elicit the LLM to answer domain questions. Huang et al. (2022); Shinn et al. (2023); Madaan et al. (2023) prompt LLMs to do self-reflection or self-refinement to correct mistakes. Wang et al. (2023a); Sun et al. (2023) prompt LLMs to do planning before solving a specific task. Wang et al. (2023c) prompts a single agent to play multiple roles with different personas, and Liang et al. (2023); Chan et al. (2023); Du et al. (2023) prompt LLMs to play different roles in debating for problem-solving. [Figure 1: Chain-of-thought still makes knowledge and computing errors in solving a complex science problem. Question: Light from a laser falls on a pair of very narrow slits separated by 0.5 micrometer, and bright fringes separated by 1.0 millimeter are observed on a distant screen. If the frequency of the laser light is doubled, what will be the separation of the bright fringes? Correct Answer: 0.5 mm. Few-shot Chain-of-thought (CoT) Response: Let\u2019s think step-by-step. The separation of the bright fringes in a double-slit interference pattern is given by the equation d \u2217 sin(\u03b8) = m \u2217 \u03bb, where d is the separation between the slits, \u03b8 is the angle of the fringe, m is the order of the fringe, and \u03bb is the wavelength of the light. In this case, the separation between the slits is given as 0.5 micrometers. For the original frequency of the laser light, the separation of the bright fringes is given by: 0.5 \u00b5m \u2217 sin(\u03b8) = m \u2217 \u03bb. Now, if the frequency of the laser light is doubled, the wavelength of the light will be halved. 
So the new separation of the bright fringes will be: 0.5 \u00b5m \u2217 sin(\u03b8) = m \u2217 \u03bb/2. Simplifying this equation, we find: 0.25 \u00b5m \u2217 sin(\u03b8) = m \u2217 \u03bb. So, the separation of the bright fringes is 0.25 mm.] Following these works, we propose a collaborative multi-agent framework (CoMM) that prompts LLMs to play different roles with different domain knowledge or task-solving duties for problem-solving. In particular, we propose a multi-path reasoning method that enables few-shot learning in the multi-agent framework. Empirical results on multiple complicated college-level science problems show that our method significantly outperforms strong baselines. Our further analysis shows that it is beneficial to include multiple agents for the collaboration, instead of prompting one agent to play multiple roles altogether. In this section, we first formally define the single-agent prompting framework, and then introduce the formal definition of the multi-agent prompting framework and its adaptations to both the zero-shot and few-shot settings (CoMM). Single-agent Prompting Given a language model P(\u03b8) and input text x, single-agent prompting applies a prompting function to the input text, x\u2032 = f_prompt(x) (which usually defines the target problem or task), and then predicts the answer y with the language model that plays as a single problem-solving agent, P(y|x\u2032; \u03b8). In the zero-shot setting, the prompting function f does not contain any demonstration examples, while in the few-shot setting, the prompting function contains a few examples. Multi-agent Prompting For multi-agent prompting, we have n language models P1(\u03b81), P2(\u03b82), ..., Pn(\u03b8n) that play different agents or roles in the framework. These language models can be the same (\u03b81 = \u03b82 = ... = \u03b8n) or different (\u03b81 \u2260 \u03b82 \u2260 ... \u2260 \u03b8n). For input text x, each agent i has its own prompting function f_i_prompt(x) that formats the input task or problem for the agent. We define the interactions of these agents as a non-parametric function \u03d5(y|g1, g2, ..., gn), where gi = Pi(yi|f_i_prompt(x); \u03b8i), yi is the output from agent i, and y is the final answer. Collaborative Zero-shot Scenario In our collaborative multi-agent setting, we restrict the multiple agents to inherit from the same language model and the number of agents to be three. Then we have three language models P1(\u03b8), P2(\u03b8), P3(\u03b8) as the agents: P1(\u03b8) and P2(\u03b8) as the problem-solving experts and P3(\u03b8) as the summarizer, as shown in Figure 2. Specifically, for a given input problem x, we use a prompt function to turn it into a system message that defines the collaborative team-working environment, xs = fs(x). For each agent, we define prompting functions to characterize its role and prompt it to give its solution accordingly. In particular, for the first expert agent, the prompting function formats the problem and the system message as x1 = f1(x, xs), and the agent then gives its output P1(y1|x1; \u03b8). For the second expert agent, the prompting function formats the problem, the system message, and the output y1 as x2 = f2(x, xs, y1), and the agent then gives its output P2(y2|x2; \u03b8). For the third agent, the summarizer, the prompting function also considers the outputs from the two experts, x3 = f3(x, xs, y1, y2), and the agent then gives the final answer P3(y|x3; \u03b8). 
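To make the collaborative zero-shot scenario above concrete, the following is a minimal Python sketch of the three-agent pipeline. It is an illustration under stated assumptions, not the released CoMM implementation: the call_llm stub stands in for a real chat-completion call (e.g., to gpt-3.5-turbo), and the role prompts are illustrative paraphrases rather than the paper's exact templates.

```python
# Minimal sketch of the collaborative zero-shot pipeline (two experts + a summarizer).
# `call_llm` is a stand-in for a real LLM API call; the prompts below are illustrative.

def call_llm(system_message: str, prompt: str) -> str:
    """Stand-in for a chat-completion call; replace the body with a real client."""
    return f"[model response to: {prompt[:40]}...]"

def comm_zero_shot(task: str) -> str:
    # Step 1: system message x_s = f_s(x) defining the team-working environment.
    system = ("You are part of a problem-solving team consisting of a physicist, "
              "a mathematician, and a summarizer. Task: " + task)

    # First expert: x1 = f1(x, x_s), output y1.
    x1 = "As the physicist, solve the task with physical reasoning:\n" + task
    y1 = call_llm(system, x1)

    # Second expert: x2 = f2(x, x_s, y1), output y2 (it sees the first expert's answer).
    x2 = ("As the mathematician, check the calculations and solve the task.\n"
          "Task: " + task + "\nPhysicist's solution: " + y1)
    y2 = call_llm(system, x2)

    # Summarizer: x3 = f3(x, x_s, y1, y2) produces the final answer y.
    x3 = ("As the summarizer, give the final answer based on the discussion.\n"
          "Task: " + task + "\nPhysicist: " + y1 + "\nMathematician: " + y2)
    return call_llm(system, x3)

if __name__ == "__main__":
    print(comm_zero_shot("If the laser frequency is doubled, what is the fringe separation?"))
```

In the multi-turn variant described next, y2 would additionally be circulated back to the first expert before the summarizer is called.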
For certain specific input tasks, multi-turn discussions are necessary. In this case, the output of the second expert agent circulates back to the first agent as its input prompt again, and the aforementioned discussion is repeated, as demonstrated in Figure 2. Collaborative Few-shot Scenario In a multi-agent setting, it is not trivial to add few-shot examples for the various agents. To which agent should we give the few-shot examples? We adopt a multi-path reasoning approach that gives the few-shot examples to the different agents. In particular, different agents have their own expertise-based reasoning paths in the few-shot demonstrations. Formally, the two expert prompting functions x1 = f1(x, xs, e1) and x2 = f2(x, xs, e2, y1) take exemplars e1 and e2 as inputs. Taking Figure 2 as an example, the few-shot examples are added to both the physicist and the mathematician agents, but with different reasoning paths. More details can be found in Appendix A. 4 Experiments In this section, we first introduce the evaluation datasets and benchmarks that focus on complicated science problems. After that, we introduce the strong baseline prompting methods for comparison. At last, we present the results of our method and the baselines on the benchmarks. 4.1 Datasets College Physics is a dataset from Massive Multitask Language Understanding (MMLU), which covers 57 subjects across different domains. It focuses on college-level physics problems. These problems remain very challenging, and large language models are still far from satisfactory performance on them. As in the example in Figure 3, LLMs still suffer from a lack of knowledge and computing ability. Moral Scenarios is another dataset from MMLU (Hendrycks et al., 2020). It focuses on advanced professional-level social science problems that are still challenging for large language models, and it is among the worst-performing tasks for many language models (Ma et al., 2023). Both datasets consist of multiple-choice questions, and we use the correct rate (Accuracy) as the metric for comparison. 4.2 Baselines Standard (Brown et al., 2020) is the first work that introduced performing tasks without any task-specific training or examples, relying solely on general pre-training with prompting. In this work, we format each problem as \"Q: {question} A:\" in the zero-shot setting, and as \"Q: {question example 1} A: {answer example 1} ... Q: {question example n} A: {answer example n} Q: {question} A:\" in the few-shot setting with n demonstration examples. Chain-of-thought (CoT) (Wei et al., 2022b) improves the Standard prompting approach by introducing a series of intermediate natural language reasoning steps that lead to the final output (the chain of thought). It hypothesizes that, given a longer prediction window, LLMs have a better chance of reaching the answer than when directly required to output it. 
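As a concrete illustration of the Standard prompt format quoted above, and of the CoT variants spelled out in the next paragraph, here is a minimal sketch; the demonstration examples are placeholders, not the prompts used in the paper.

```python
# Minimal sketch of how the baseline prompts are assembled, following the formats
# quoted in the surrounding text. The demonstrations passed in are placeholders.

def standard_prompt(question: str, examples=None) -> str:
    """Standard prompting: zero-shot if `examples` is None, otherwise few-shot."""
    prefix = ""
    for q, a in (examples or []):
        prefix += f"Q: {q} A: {a} "
    return prefix + f"Q: {question} A:"

def cot_prompt(question: str, examples=None) -> str:
    """Chain-of-thought prompting with the 'Let's think step by step' trigger."""
    prefix = ""
    for q, a_with_reasoning in (examples or []):
        prefix += f"Q: {q} A: Let's think step by step. {a_with_reasoning} "
    return prefix + f"Q: {question} A: Let's think step by step."

if __name__ == "__main__":
    demos = [("2 + 2 = ?", "The answer is 4.")]  # placeholder demonstration
    print(standard_prompt("3 + 5 = ?", demos))
    print(cot_prompt("3 + 5 = ?"))
```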
For the zero-shot implementation, we follow the Zero-shot-CoT proposed by Wang et al. (2023a) and add the \"Let\u2019s think step by step\" prompt before the answer, i.e., \"Q: {question} A: Let\u2019s think step by step.\". As for the few-shot implementation, we follow the original settings from Wei et al. (2022b), i.e., \"Q: {question example 1} A: Let\u2019s think step by step. {answer example 1 with chain of thought} ... Q: {question example n} A: Let\u2019s think step by step. {answer example n with chain of thought} Q: {question} A: Let\u2019s think step by step.\" for the few-shot setting with n demonstration examples. Thought Experiment (Thought) (Ma et al., 2023) is a reasoning framework that specializes in moral reasoning by using counterfactual reasoning. It is a multi-agent framework with multi-step prompting, and each step involves prompting the LLMs to solve a specific task. Specifically, this method involves employing counterfactual thinking to envision various, often hypothetical, situations, and then deliberating on the consequences of these imagined circumstances. By processing these scenarios, it aids in consolidating intermediate reflections, thereby leading to a deeper comprehension of the issue at hand and guiding towards the most appropriate solution. We adopt the same settings for both zero-shot and few-shot as provided by Ma et al. (2023). 4.3 Settings Backbone Model For a fair comparison, we use gpt-3.5-turbo-0613 (https://openai.com/) as the backbone model, and set the temperature to 0 in all our experiments. Settings for College Physics We prompt the first agent P1(\u03b8) to be a physicist, the second agent P2(\u03b8) to be a mathematician, and the third agent P3(\u03b8) to be the summarizer. In the zero-shot setting, we do not provide demonstration examples, while in the few-shot setting, we give the same 5 examples to the two experts, but with different reasoning paths, i.e., the reasoning path of a physicist role and the reasoning path of a mathematician role, respectively. We only prompt the group to discuss once for this benchmark. More details can be found in Appendix A. Settings for Moral Scenarios In the zero-shot setting, we prompt the first agent P1(\u03b8) to be a task decomposer, the second agent P2(\u03b8) to be a subproblem solver, and the third agent P3(\u03b8) to be the summarizer. In the few-shot setting, we also give each expert 5 examples, and we prompt the first agent P1(\u03b8) to be a chain-of-thought reasoner with a CoT reasoning path, the second agent P2(\u03b8) to be a Thought reasoner with a thought-experiment reasoning path, and the third agent P3(\u03b8) to be the summarizer. We prompt the group to discuss twice for this benchmark. More details can be found in Appendix A. 4.4 Main Results The main experimental results are shown in Table 1. [Table 1: Main Test Results (Accuracy, %). Numbers in parentheses are performance gains of CoMM over the previous state of the art. Zero-shot, Moral Scenarios / College Physics: Standard (Brown et al., 2020) 38.65 / 44.12; CoT (Wei et al., 2022b) 45.58 / 50.00; Thought (Ma et al., 2023) 49.39 / --; CoMM 52.17 (+2.78) / 54.90 (+4.90). Few-shot, Moral Scenarios / College Physics: Standard 38.21 / 48.04; CoT 64.92 / 56.86; Thought 56.42 / --; CoMM 65.03 (+0.11) / 64.71 (+7.85).] It is clearly observable that the proposed CoMM approach outperforms the state-of-the-art baselines in both the zero-shot and few-shot settings. In detail, it achieves absolute average improvements of 3.84% in the zero-shot setting and 8.23% in the few-shot setting. 
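To illustrate the multi-path few-shot prompting used in these settings (the same demonstrations, written with a different reasoning path for each expert, i.e., the exemplars e1 and e2 from the framework section), the following is a minimal sketch; the role names, exemplar texts, and prompt wording are illustrative assumptions, not the paper's prompts.

```python
# Minimal sketch of multi-path few-shot prompting: each expert receives the same
# demonstration questions, but with reasoning written in its own style (e1 vs. e2).
# Exemplar contents and role descriptions are placeholders, not the paper's prompts.

def expert_prompt(role: str, exemplars, task: str, peer_output: str = "") -> str:
    """Build x_i = f_i(x, x_s, e_i[, y_1]) for one expert agent."""
    parts = [f"You are the {role} in a collaborative problem-solving team."]
    for question, reasoning in exemplars:
        parts.append(f"Q: {question}\nA ({role} reasoning path): {reasoning}")
    if peer_output:
        parts.append("Previous expert's solution: " + peer_output)
    parts.append(f"Q: {task}\nA:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    demo_q = "Two narrow slits 0.5 micrometer apart ..."                    # placeholder
    physics_path = "Use the fringe-separation relation s = lambda*L/d ..."  # placeholder
    math_path = "Write s1 = lambda1*L/d and take the ratio s2/s1 ..."       # placeholder
    x1 = expert_prompt("physicist", [(demo_q, physics_path)], "New problem ...")
    x2 = expert_prompt("mathematician", [(demo_q, math_path)], "New problem ...",
                       peer_output="[first expert's answer y1]")
    print(x1)
    print("---")
    print(x2)
```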
CoMM improves more in the few-shot setting, further demonstrating the effectiveness of applying the multi-path reasoning approach in the multi-agent framework. Also, CoMM improves more on the complicated College Physics dataset that requires more domain knowledge, further showcasing the efficacy of CoMM in solving complex problems. [Table 2: Single Agent vs. Multiple Agents (Accuracy, %). Numbers in parentheses are the performance gains. Single Agent / Multiple Agents: Moral Scenarios Zero-shot 27.71 / 52.17 (+24.46), Few-shot 42.68 / 65.03 (+22.35); College Physics Zero-shot 42.16 / 54.90 (+12.74), Few-shot 56.86 / 64.71 (+7.85).] [Table 3: Single Expert vs. Multiple Experts on College Physics (Accuracy, %). Numbers in parentheses are the performance gains. Zero-shot / Few-shot: CoT (Wei et al., 2022b) 50.00 / 56.86; One Physicist Only 47.06 / 44.12; One Mathematician Only 42.16 / 58.82; Two Physicists 47.05 / 50.98; Two Mathematicians 52.94 / 59.80; Both Experts (CoMM) 54.90 (+1.96) / 64.71 (+4.91).] 5 Analysis In this section, we demonstrate the necessity of multiple \"multiples\" with empirical evidence: multiple agents, multiple experts, multiple reasoning paths, and multiple discussion turns. 5.1 Are Multiple Independent Agents Necessary? Our proposed CoMM approach prompts multiple instances of LLMs to play different agents. But why not prompt one single instance of an LLM to play the different roles altogether to solve the target problem? This is similar to the multi-agent framework proposed by Wang et al. (2023c). We experiment with the same prompting text as CoMM using a single instance of the LLM, and the results are shown in Table 2. Apparently, the performance of multiple agents (CoMM) significantly outperforms the single-agent approach across all benchmarks and settings. We hypothesize that the possible reason is that a single instance of an LLM tends to be self-consistent, and prompting it to switch among different roles confuses the model and prevents it from making the right predictions. Our results are in line with the findings of Xu et al. (2023). 5.2 Are Multiple Domain Experts Necessary? In the College Physics benchmark, we prompt the LLMs to play two experts: one physicist and one mathematician, aiming to utilize their domain knowledge independently in solving the problem collaboratively and complementarily. We hope the physicist agent can elicit the domain knowledge of physics and the mathematician agent can overcome the computing errors. Here we empirically examine whether the multiple domain experts are collaborating. As shown in Table 3, the single-expert approach shows poor performance and could not beat the CoT baseline. Furthermore, we prompt the LLMs to play multiple experts with the same expertise. The results in Table 3 demonstrate that such settings improve over the single-expert cases, but still underperform the setting with multiple different experts. Overall, the results empirically demonstrate the necessity and efficacy of the multiple-expert collaborative framework. 5.3 Are Multi-Turn Discussions Necessary? As mentioned in Section 3, our proposed CoMM framework supports multi-turn discussions, which means that the agents can discuss multiple times to reach a final answer. So are multi-turn discussions necessary? We experiment on the benchmarks with one-turn discussions and two-turn discussions, as shown in Table 4. It turns out that the appropriate number of discussion turns depends on the benchmark or dataset. 
[Table 4: Single Turn vs. Multiple Turns (Accuracy, %), One Turn / Two Turns. Moral Scenarios: Zero-shot 48.27 / 52.17; Few-shot 64.92 / 65.03. College Physics: Zero-shot 54.90 / 45.09; Few-shot 64.71 / 55.88.] For the Moral Scenarios dataset, two-turn discussions perform better, while one-turn discussions perform better on the College Physics dataset. We hypothesize that the possible rationale is that the rigorous science problems in College Physics require the reasoning paths to be concise and clear. A plethora of discussion may bring confusion and hallucinations that are not beneficial to such problems. However, the social science problems from the Moral Scenarios benchmark naturally benefit from thorough discussions, as many of the truths in social science originated from human discussions that reach a consensus. 6 Case Study In this section, we provide case studies of CoMM on the different benchmarks. In Figure 3, we show the reasoning results for the same problem as in Figure 1 on College Physics, on which the CoT approach suffers from incorrect knowledge and incorrect calculations. In our CoMM framework, the physicist agent also makes calculation errors. However, the errors are corrected by the mathematician, and finally the team reaches the correct answer. We can observe that our framework successfully elicits the domain physics knowledge from the physicist agent and the calculation ability from the mathematician agent. Also, the three agents closely collaborate with each other: the physicist provides domain knowledge, the mathematician takes advantage of that knowledge and corrects the calculations, and the summarizer gives the final answer based on the discussion. In Figure 4, we delve into a detailed case study conducted on Moral Scenarios, specifically focusing on the few-shot learning setting. Notably, the CoT baseline encounters difficulties in correctly identifying the moral correctness of the action \"trash dog\u2019s poop while walking.\" This limitation highlights the challenges faced by LLMs in understanding and judging moral scenarios. Our model, CoMM, introduces a more layered approach to this issue. In the initial phase of our study, both the CoT reasoner and the Thought reasoner within CoMM failed to accurately assess the moral aspect of the scenario. This initial failure underscores the intricacies of moral reasoning and the need for more advanced reasoning capabilities in the models. However, the scenario undergoes a significant shift following the first round of discussion within CoMM. After this collaborative reflection, both reasoners, the CoT reasoner and the Thought reasoner, adjust their initial assessments and correctly identify the scenario as morally correct. Additionally, the summarizer in CoMM plays a crucial role at this point, affirming the correct conclusion reached by the reasoners and providing a definitive final answer that aligns with their revised understanding. This case study effectively showcases the initial challenges and the eventual success of CoMM in moral reasoning, emphasizing the value of iterative and collaborative reasoning among LLM agents. 7 Conclusion This study underscores the significant strides made in enhancing the reasoning capabilities of LLMs through the innovative CoMM prompting framework. By leveraging a multi-agent, multi-reasoning-path approach, we successfully prompted LLMs to assume varied roles within a problem-solving team, fostering a collaborative environment crucial for tackling complex science problems. 
The empirical results obtained from two college-level science tasks not only validate the efficacy of our method but also highlight the potential of few-shot prompting in multi-agent contexts. More importantly, our analysis reveals the indispensable role of distinct role-play in achieving more nuanced and sophisticated problem-solving strategies. This research paves the way for future explorations into the realm of advanced AI reasoning, particularly in the application of LLMs to intricate and specialized tasks. [Figure 3: Example results of CoMM on College Physics (the same problem as in Figure 1). Physicist: The separation of the bright fringes is determined by the wavelength of the light and the distance between the slits. The formula for the separation of the bright fringes is given by: separation of bright fringes = (wavelength \u2217 distance to the screen) / distance between the slits. In this case, the separation of the bright fringes is given as 1.0 millimeter. Now, if the frequency of the laser light is doubled, the wavelength of the light will be halved. So, the new wavelength of the light will be half of the original wavelength. Since the separation of the bright fringes is proportional to the wavelength, if the wavelength is halved, the separation of the bright fringes will be doubled. Therefore, the separation of the bright fringes will be 2.0 millimeters. Mathematician: Let's denote the original separation of the slits as d1 = 0.5 micrometers. The separation of the bright fringes on the screen is given as s1 = 1.0 millimeter. Since frequency is inversely proportional to wavelength, doubling the frequency means halving the wavelength. Let's denote the new separation of the slits as d2 = d1. The new separation of the bright fringes on the screen is denoted as s2. Using the formula from the physicist for interference in waves, we have: s1 = \u03bb1 \u2217 L / d1 and s2 = \u03bb2 \u2217 L / d2, where \u03bb1 and \u03bb2 are the original and new wavelengths of the light, and L is the distance between the slits and the screen. Since we are doubling the frequency, the new wavelength is halved, so \u03bb2 = \u03bb1 / 2. Substituting this into the equation for s2, we get: s2 = (\u03bb1 / 2) \u2217 L / d2 = (1/2) \u2217 \u03bb1 \u2217 L / d2 = (1/2) \u2217 s1 = 0.5 mm. Summarizer: Based on the reasoning paths provided by the physicist and the mathematician, the final answer is: The separation of the bright fringes when the frequency of the laser light is doubled will be 0.5 mm.] 8 Limitations Even though our proposed CoMM framework has further pushed the upper bound of the reasoning ability of LLMs, the framework is still suffering from limitations. The proposed CoMM framework still requires task-specific design to define the experts and reasoning examples. However, this is a common limitation for all the CoT-style (Wei et al., 2022b) approaches. For example, the CoT approach needs specific designs for the few-shot examples with the chain of reasoning steps; the Thought baseline (Ma et al., 2023) requires specific thought experiment designs, and it only works on one specific benchmark (the Moral Scenario from MMLU (Hendrycks et al., 2020)). We leave the automatic prompting design for the CoMM framework as future work." + }, + { + "url": "http://arxiv.org/abs/2203.11171v4", + "title": "Self-Consistency Improves Chain of Thought Reasoning in Language Models", + "abstract": "Chain-of-thought prompting combined with pre-trained large language models\nhas achieved encouraging results on complex reasoning tasks. 
In this paper, we\npropose a new decoding strategy, self-consistency, to replace the naive greedy\ndecoding used in chain-of-thought prompting. It first samples a diverse set of\nreasoning paths instead of only taking the greedy one, and then selects the\nmost consistent answer by marginalizing out the sampled reasoning paths.\nSelf-consistency leverages the intuition that a complex reasoning problem\ntypically admits multiple different ways of thinking leading to its unique\ncorrect answer. Our extensive empirical evaluation shows that self-consistency\nboosts the performance of chain-of-thought prompting with a striking margin on\na range of popular arithmetic and commonsense reasoning benchmarks, including\nGSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and\nARC-challenge (+3.9%).", + "authors": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou", + "published": "2022-03-21", + "updated": "2023-03-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.11610v2", + "title": "Large Language Models Can Self-Improve", + "abstract": "Large Language Models (LLMs) have achieved excellent performances in various\ntasks. However, fine-tuning an LLM requires extensive supervision. Human, on\nthe other hand, may improve their reasoning abilities by self-thinking without\nexternal inputs. In this work, we demonstrate that an LLM is also capable of\nself-improving with only unlabeled datasets. We use a pre-trained LLM to\ngenerate \"high-confidence\" rationale-augmented answers for unlabeled questions\nusing Chain-of-Thought prompting and self-consistency, and fine-tune the LLM\nusing those self-generated solutions as target outputs. We show that our\napproach improves the general reasoning ability of a 540B-parameter LLM\n(74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and\n63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance,\nwithout any ground truth label. We conduct ablation studies and show that\nfine-tuning on reasoning is critical for self-improvement.", + "authors": "Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han", + "published": "2022-10-20", + "updated": "2022-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. 
We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14564v1", + "title": "PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents", + "abstract": "Strategies such as chain-of-thought prompting improve the performance of\nlarge language models (LLMs) on complex reasoning tasks by decomposing input\nexamples into intermediate steps. However, it remains unclear how to apply such\nmethods to reason over long input documents, in which both the decomposition\nand the output of each intermediate step are non-trivial to obtain. In this\nwork, we propose PEARL, a prompting framework to improve reasoning over long\ndocuments, which consists of three stages: action mining, plan formulation, and\nplan execution. More specifically, given a question about a long document,\nPEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,\nFIND_EVENT, FIND_RELATION) and then executes them over the document to obtain\nthe answer. Each stage of PEARL is implemented via zero-shot or few-shot\nprompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate\nPEARL on a challenging subset of the QuALITY dataset, which contains questions\nthat require complex reasoning over long narrative texts. PEARL outperforms\nzero-shot and chain-of-thought prompting on this dataset, and ablation\nexperiments show that each stage of PEARL is critical to its performance.\nOverall, PEARL is a first step towards leveraging LLMs to reason over long\ndocuments.", + "authors": "Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.05300v4", + "title": "Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration", + "abstract": "Human intelligence thrives on cognitive synergy, where collaboration among\ndifferent minds yield superior outcomes compared to isolated individuals. 
In\nthis work, we propose Solo Performance Prompting (SPP), which transforms a\nsingle LLM into a cognitive synergist by engaging in multi-turn\nself-collaboration with multiple personas. A cognitive synergist is an\nintelligent agent that collaboratively combines multiple minds' strengths and\nknowledge to enhance problem-solving in complex tasks. By dynamically\nidentifying and simulating different personas based on task inputs, SPP\nunleashes the potential of cognitive synergy in LLMs. Our in-depth analysis\nshows that assigning multiple fine-grained personas in LLMs improves\nproblem-solving abilities compared to using a single or fixed number of\npersonas. We evaluate SPP on three challenging tasks: Trivia Creative Writing,\nCodenames Collaborative, and Logic Grid Puzzle, encompassing both\nknowledge-intensive and reasoning-intensive types. Unlike previous works, such\nas Chain-of-Thought, that solely enhance the reasoning abilities in LLMs,\nexperimental results demonstrate that SPP effectively reduces factual\nhallucination, and maintains strong reasoning capabilities. Additionally,\ncomparative experiments show that cognitive synergy only emerges in GPT-4 and\ndoes not appear in less capable models, such as GPT-3.5-turbo and\nLlama2-13b-chat, which draws an interesting analogy to human development. Code,\ndata, and prompts can be found at:\nhttps://github.com/MikeWangWZHL/Solo-Performance-Prompting.git.", + "authors": "Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, Heng Ji", + "published": "2023-07-11", + "updated": "2024-03-26", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.05981v1", + "title": "Double Retrieval and Ranking for Accurate Question Answering", + "abstract": "Recent work has shown that an answer verification step introduced in\nTransformer-based answer selection models can significantly improve the state\nof the art in Question Answering. This step is performed by aggregating the\nembeddings of top $k$ answer candidates to support the verification of a target\nanswer. Although the approach is intuitive and sound still shows two\nlimitations: (i) the supporting candidates are ranked only according to the\nrelevancy with the question and not with the answer, and (ii) the support\nprovided by the other answer candidates is suboptimal as these are retrieved\nindependently of the target answer. In this paper, we address both drawbacks by\nproposing (i) a double reranking model, which, for each target answer, selects\nthe best support; and (ii) a second neural retrieval stage designed to encode\nquestion and answer pair as the query, which finds more specific verification\ninformation. The results on three well-known datasets for AS2 show consistent\nand significant improvement of the state of the art.", + "authors": "Zeyu Zhang, Thuy Vu, Alessandro Moschitti", + "published": "2022-01-16", + "updated": "2022-01-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.07682v2", + "title": "Emergent Abilities of Large Language Models", + "abstract": "Scaling up language models has been shown to predictably improve performance\nand sample efficiency on a wide range of downstream tasks. This paper instead\ndiscusses an unpredictable phenomenon that we refer to as emergent abilities of\nlarge language models. 
We consider an ability to be emergent if it is not\npresent in smaller models but is present in larger models. Thus, emergent\nabilities cannot be predicted simply by extrapolating the performance of\nsmaller models. The existence of such emergence implies that additional scaling\ncould further expand the range of capabilities of language models.", + "authors": "Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus", + "published": "2022-06-15", + "updated": "2022-10-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.13551v1", + "title": "Graph Representation of Narrative Context: Coherence Dependency via Retrospective Questions", + "abstract": "This work introduces a novel and practical paradigm for narrative\ncomprehension, stemming from the observation that individual passages within\nnarratives are often cohesively related than being isolated. We therefore\npropose to formulate a graph upon narratives dubbed NARCO that depicts a\ntask-agnostic coherence dependency of the entire context. Especially, edges in\nNARCO encompass retrospective free-form questions between two context snippets\nreflecting high-level coherent relations, inspired by the cognitive perception\nof humans who constantly reinstate relevant events from prior context.\nImportantly, our graph is instantiated through our designed two-stage LLM\nprompting, thereby without reliance on human annotations. We present three\nunique studies on its practical utility, examining the edge efficacy via recap\nidentification, local context augmentation via plot retrieval, and broader\napplications exemplified by long document QA. Experiments suggest that our\napproaches leveraging NARCO yield performance boost across all three tasks.", + "authors": "Liyan Xu, Jiangnan Li, Mo Yu, Jie Zhou", + "published": "2024-02-21", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.02311v5", + "title": "PaLM: Scaling Language Modeling with Pathways", + "abstract": "Large language models have been shown to achieve remarkable performance\nacross a variety of natural language tasks using few-shot learning, which\ndrastically reduces the number of task-specific training examples needed to\nadapt the model to a particular application. To further our understanding of\nthe impact of scale on few-shot learning, we trained a 540-billion parameter,\ndensely activated, Transformer language model, which we call Pathways Language\nModel PaLM. We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML\nsystem which enables highly efficient training across multiple TPU Pods. We\ndemonstrate continued benefits of scaling by achieving state-of-the-art\nfew-shot learning results on hundreds of language understanding and generation\nbenchmarks. On a number of these tasks, PaLM 540B achieves breakthrough\nperformance, outperforming the finetuned state-of-the-art on a suite of\nmulti-step reasoning tasks, and outperforming average human performance on the\nrecently released BIG-bench benchmark. A significant number of BIG-bench tasks\nshowed discontinuous improvements from model scale, meaning that performance\nsteeply increased as we scaled to our largest model. 
PaLM also has strong\ncapabilities in multilingual tasks and source code generation, which we\ndemonstrate on a wide array of benchmarks. We additionally provide a\ncomprehensive analysis on bias and toxicity, and study the extent of training\ndata memorization with respect to model scale. Finally, we discuss the ethical\nconsiderations related to large language models and discuss potential\nmitigation strategies.", + "authors": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel", + "published": "2022-04-05", + "updated": "2022-10-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. 
By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.08073v1", + "title": "Constitutional AI: Harmlessness from AI Feedback", + "abstract": "As AI systems become more capable, we would like to enlist their help to\nsupervise other AIs. We experiment with methods for training a harmless AI\nassistant through self-improvement, without any human labels identifying\nharmful outputs. The only human oversight is provided through a list of rules\nor principles, and so we refer to the method as 'Constitutional AI'. The\nprocess involves both a supervised learning and a reinforcement learning phase.\nIn the supervised phase we sample from an initial model, then generate\nself-critiques and revisions, and then finetune the original model on revised\nresponses. In the RL phase, we sample from the finetuned model, use a model to\nevaluate which of the two samples is better, and then train a preference model\nfrom this dataset of AI preferences. We then train with RL using the preference\nmodel as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a\nresult we are able to train a harmless but non-evasive AI assistant that\nengages with harmful queries by explaining its objections to them. Both the SL\nand RL methods can leverage chain-of-thought style reasoning to improve the\nhuman-judged performance and transparency of AI decision making. These methods\nmake it possible to control AI behavior more precisely and with far fewer human\nlabels.", + "authors": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan", + "published": "2022-12-15", + "updated": "2022-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.18703v7", + "title": "Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey", + "abstract": "Large language models (LLMs) have significantly advanced the field of natural\nlanguage processing (NLP), providing a highly useful, task-agnostic foundation\nfor a wide range of applications. 
However, directly applying LLMs to solve\nsophisticated problems in specific domains meets many hurdles, caused by the\nheterogeneity of domain data, the sophistication of domain knowledge, the\nuniqueness of domain objectives, and the diversity of the constraints (e.g.,\nvarious social norms, cultural conformity, religious beliefs, and ethical\nstandards in the domain applications). Domain specification techniques are key\nto make large language models disruptive in many applications. Specifically, to\nsolve these hurdles, there has been a notable increase in research and\npractices conducted in recent years on the domain specialization of LLMs. This\nemerging field of study, with its substantial potential for impact,\nnecessitates a comprehensive and systematic review to better summarize and\nguide ongoing work in this area. In this article, we present a comprehensive\nsurvey on domain specification techniques for large language models, an\nemerging direction critical for large language model applications. First, we\npropose a systematic taxonomy that categorizes the LLM domain-specialization\ntechniques based on the accessibility to LLMs and summarizes the framework for\nall the subcategories as well as their relations and differences to each other.\nSecond, we present an extensive taxonomy of critical application domains that\ncan benefit dramatically from specialized LLMs, discussing their practical\nsignificance and open challenges. Last, we offer our insights into the current\nresearch status and future trends in this area.", + "authors": "Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao", + "published": "2023-05-30", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10435v2", + "title": "PAL: Program-aided Language Models", + "abstract": "Large language models (LLMs) have recently demonstrated an impressive ability\nto perform arithmetic and symbolic reasoning tasks, when provided with a few\nexamples at test time (\"few-shot prompting\"). Much of this success can be\nattributed to prompting methods such as \"chain-of-thought'', which employ LLMs\nfor both understanding the problem description by decomposing it into steps, as\nwell as solving each step of the problem. While LLMs seem to be adept at this\nsort of step-by-step decomposition, LLMs often make logical and arithmetic\nmistakes in the solution part, even when the problem is decomposed correctly.\nIn this paper, we present Program-Aided Language models (PAL): a novel approach\nthat uses the LLM to read natural language problems and generate programs as\nthe intermediate reasoning steps, but offloads the solution step to a runtime\nsuch as a Python interpreter. With PAL, decomposing the natural language\nproblem into runnable steps remains the only learning task for the LLM, while\nsolving is delegated to the interpreter. We demonstrate this synergy between a\nneural LLM and a symbolic interpreter across 13 mathematical, symbolic, and\nalgorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. 
In all\nthese natural language reasoning tasks, generating code using an LLM and\nreasoning using a Python interpreter leads to more accurate results than much\nlarger models. For example, PAL using Codex achieves state-of-the-art few-shot\naccuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B\nwhich uses chain-of-thought by absolute 15% top-1. Our code and data are\npublicly available at http://reasonwithpal.com/ .", + "authors": "Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig", + "published": "2022-11-18", + "updated": "2023-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17651v2", + "title": "Self-Refine: Iterative Refinement with Self-Feedback", + "abstract": "Like humans, large language models (LLMs) do not always generate the best\noutput on their first try. Motivated by how humans refine their written text,\nwe introduce Self-Refine, an approach for improving initial outputs from LLMs\nthrough iterative feedback and refinement. The main idea is to generate an\ninitial output using an LLMs; then, the same LLMs provides feedback for its\noutput and uses it to refine itself, iteratively. Self-Refine does not require\nany supervised training data, additional training, or reinforcement learning,\nand instead uses a single LLM as the generator, refiner, and feedback provider.\nWe evaluate Self-Refine across 7 diverse tasks, ranging from dialog response\ngeneration to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,\nand GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine\nare preferred by humans and automatic metrics over those generated with the\nsame LLM using conventional one-step generation, improving by ~20% absolute on\naverage in task performance. Our work demonstrates that even state-of-the-art\nLLMs like GPT-4 can be further improved at test time using our simple,\nstandalone approach.", + "authors": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark", + "published": "2023-03-30", + "updated": "2023-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. 
To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03442v2", + "title": "Generative Agents: Interactive Simulacra of Human Behavior", + "abstract": "Believable proxies of human behavior can empower interactive applications\nranging from immersive environments to rehearsal spaces for interpersonal\ncommunication to prototyping tools. In this paper, we introduce generative\nagents--computational software agents that simulate believable human behavior.\nGenerative agents wake up, cook breakfast, and head to work; artists paint,\nwhile authors write; they form opinions, notice each other, and initiate\nconversations; they remember and reflect on days past as they plan the next\nday. To enable generative agents, we describe an architecture that extends a\nlarge language model to store a complete record of the agent's experiences\nusing natural language, synthesize those memories over time into higher-level\nreflections, and retrieve them dynamically to plan behavior. We instantiate\ngenerative agents to populate an interactive sandbox environment inspired by\nThe Sims, where end users can interact with a small town of twenty five agents\nusing natural language. In an evaluation, these generative agents produce\nbelievable individual and emergent social behaviors: for example, starting with\nonly a single user-specified notion that one agent wants to throw a Valentine's\nDay party, the agents autonomously spread invitations to the party over the\nnext two days, make new acquaintances, ask each other out on dates to the\nparty, and coordinate to show up for the party together at the right time. We\ndemonstrate through ablation that the components of our agent\narchitecture--observation, planning, and reflection--each contribute critically\nto the believability of agent behavior. By fusing large language models with\ncomputational, interactive agents, this work introduces architectural and\ninteraction patterns for enabling believable simulations of human behavior.", + "authors": "Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. 
Bernstein", + "published": "2023-04-07", + "updated": "2023-08-06", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.10601v2", + "title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models", + "abstract": "Language models are increasingly being deployed for general problem solving\nacross a wide range of tasks, but are still confined to token-level,\nleft-to-right decision-making processes during inference. This means they can\nfall short in tasks that require exploration, strategic lookahead, or where\ninitial decisions play a pivotal role. To surmount these challenges, we\nintroduce a new framework for language model inference, Tree of Thoughts (ToT),\nwhich generalizes over the popular Chain of Thought approach to prompting\nlanguage models, and enables exploration over coherent units of text (thoughts)\nthat serve as intermediate steps toward problem solving. ToT allows LMs to\nperform deliberate decision making by considering multiple different reasoning\npaths and self-evaluating choices to decide the next course of action, as well\nas looking ahead or backtracking when necessary to make global choices. Our\nexperiments show that ToT significantly enhances language models'\nproblem-solving abilities on three novel tasks requiring non-trivial planning\nor search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in\nGame of 24, while GPT-4 with chain-of-thought prompting only solved 4% of\ntasks, our method achieved a success rate of 74%. Code repo with all prompts:\nhttps://github.com/princeton-nlp/tree-of-thought-llm.", + "authors": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan", + "published": "2023-05-17", + "updated": "2023-12-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.01116v1", + "title": "The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only", + "abstract": "Large language models are commonly trained on a mixture of filtered web data\nand curated high-quality corpora, such as social media conversations, books, or\ntechnical papers. This curation process is believed to be necessary to produce\nperformant models with broad zero-shot generalization abilities. However, as\nlarger models requiring pretraining on trillions of tokens are considered, it\nis unclear how scalable is curation and whether we will run out of unique\nhigh-quality data soon. At variance with previous beliefs, we show that\nproperly filtered and deduplicated web data alone can lead to powerful models;\neven significantly outperforming models from the state-of-the-art trained on\nThe Pile. Despite extensive filtering, the high-quality data we extract from\nthe web is still plentiful, and we are able to obtain five trillion tokens from\nCommonCrawl. 
We publicly release an extract of 600 billion tokens from our\nRefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.", + "authors": "Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay", + "published": "2023-06-01", + "updated": "2023-06-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17760v2", + "title": "CAMEL: Communicative Agents for \"Mind\" Exploration of Large Language Model Society", + "abstract": "The rapid advancement of chat-based language models has led to remarkable\nprogress in complex task-solving. However, their success heavily relies on\nhuman input to guide the conversation, which can be challenging and\ntime-consuming. This paper explores the potential of building scalable\ntechniques to facilitate autonomous cooperation among communicative agents, and\nprovides insight into their \"cognitive\" processes. To address the challenges of\nachieving autonomous cooperation, we propose a novel communicative agent\nframework named role-playing. Our approach involves using inception prompting\nto guide chat agents toward task completion while maintaining consistency with\nhuman intentions. We showcase how role-playing can be used to generate\nconversational data for studying the behaviors and capabilities of a society of\nagents, providing a valuable resource for investigating conversational language\nmodels. In particular, we conduct comprehensive studies on\ninstruction-following cooperation in multi-agent settings. Our contributions\ninclude introducing a novel communicative agent framework, offering a scalable\napproach for studying the cooperative behaviors and capabilities of multi-agent\nsystems, and open-sourcing our library to support research on communicative\nagents and beyond: https://github.com/camel-ai/camel.", + "authors": "Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, Bernard Ghanem", + "published": "2023-03-31", + "updated": "2023-11-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.CY", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.18703v7", + "title": "Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey", + "abstract": "Large language models (LLMs) have significantly advanced the field of natural\nlanguage processing (NLP), providing a highly useful, task-agnostic foundation\nfor a wide range of applications. However, directly applying LLMs to solve\nsophisticated problems in specific domains meets many hurdles, caused by the\nheterogeneity of domain data, the sophistication of domain knowledge, the\nuniqueness of domain objectives, and the diversity of the constraints (e.g.,\nvarious social norms, cultural conformity, religious beliefs, and ethical\nstandards in the domain applications). Domain specification techniques are key\nto make large language models disruptive in many applications. Specifically, to\nsolve these hurdles, there has been a notable increase in research and\npractices conducted in recent years on the domain specialization of LLMs. This\nemerging field of study, with its substantial potential for impact,\nnecessitates a comprehensive and systematic review to better summarize and\nguide ongoing work in this area. 
In this article, we present a comprehensive\nsurvey on domain specification techniques for large language models, an\nemerging direction critical for large language model applications. First, we\npropose a systematic taxonomy that categorizes the LLM domain-specialization\ntechniques based on the accessibility to LLMs and summarizes the framework for\nall the subcategories as well as their relations and differences to each other.\nSecond, we present an extensive taxonomy of critical application domains that\ncan benefit dramatically from specialized LLMs, discussing their practical\nsignificance and open challenges. Last, we offer our insights into the current\nresearch status and future trends in this area.", + "authors": "Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao", + "published": "2023-05-30", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.11366v4", + "title": "Reflexion: Language Agents with Verbal Reinforcement Learning", + "abstract": "Large language models (LLMs) have been increasingly used to interact with\nexternal environments (e.g., games, compilers, APIs) as goal-driven agents.\nHowever, it remains challenging for these language agents to quickly and\nefficiently learn from trial-and-error as traditional reinforcement learning\nmethods require extensive training samples and expensive model fine-tuning. We\npropose Reflexion, a novel framework to reinforce language agents not by\nupdating weights, but instead through linguistic feedback. Concretely,\nReflexion agents verbally reflect on task feedback signals, then maintain their\nown reflective text in an episodic memory buffer to induce better\ndecision-making in subsequent trials. 
Reflexion is flexible enough to\nincorporate various types (scalar values or free-form language) and sources\n(external or internally simulated) of feedback signals, and obtains significant\nimprovements over a baseline agent across diverse tasks (sequential\ndecision-making, coding, language reasoning). For example, Reflexion achieves a\n91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous\nstate-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis\nstudies using different feedback signals, feedback incorporation methods, and\nagent types, and provide insights into how they affect performance.", + "authors": "Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao", + "published": "2023-03-20", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.01068v4", + "title": "OPT: Open Pre-trained Transformer Language Models", + "abstract": "Large language models, which are often trained for hundreds of thousands of\ncompute days, have shown remarkable capabilities for zero- and few-shot\nlearning. Given their computational cost, these models are difficult to\nreplicate without significant capital. For the few that are available through\nAPIs, no access is granted to the full model weights, making them difficult to\nstudy. We present Open Pre-trained Transformers (OPT), a suite of decoder-only\npre-trained transformers ranging from 125M to 175B parameters, which we aim to\nfully and responsibly share with interested researchers. We show that OPT-175B\nis comparable to GPT-3, while requiring only 1/7th the carbon footprint to\ndevelop. We are also releasing our logbook detailing the infrastructure\nchallenges we faced, along with code for experimenting with all of the released\nmodels.", + "authors": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer", + "published": "2022-05-02", + "updated": "2022-06-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14688v1", + "title": "ExpertPrompting: Instructing Large Language Models to be Distinguished Experts", + "abstract": "The answering quality of an aligned large language model (LLM) can be\ndrastically improved if treated with proper crafting of prompts. In this paper,\nwe propose ExpertPrompting to elicit the potential of LLMs to answer as\ndistinguished experts. We first utilize In-Context Learning to automatically\nsynthesize detailed and customized descriptions of the expert identity for each\nspecific instruction, and then ask LLMs to provide answer conditioned on such\nagent background. Based on this augmented prompting strategy, we produce a new\nset of instruction-following data using GPT-3.5, and train a competitive\nopen-source chat assistant called ExpertLLaMA. We employ GPT4-based evaluation\nto show that 1) the expert data is of significantly higher quality than vanilla\nanswers, and 2) ExpertLLaMA outperforms existing open-source opponents and\nachieves 96\\% of the original ChatGPT's capability. 
All data and the\nExpertLLaMA model will be made publicly available at\n\\url{https://github.com/OFA-Sys/ExpertLLaMA}.", + "authors": "Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.00352v5", + "title": "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework", + "abstract": "Remarkable progress has been made on automated problem solving through\nsocieties of agents based on large language models (LLMs). Existing LLM-based\nmulti-agent systems can already solve simple dialogue tasks. Solutions to more\ncomplex tasks, however, are complicated through logic inconsistencies due to\ncascading hallucinations caused by naively chaining LLMs. Here we introduce\nMetaGPT, an innovative meta-programming framework incorporating efficient human\nworkflows into LLM-based multi-agent collaborations. MetaGPT encodes\nStandardized Operating Procedures (SOPs) into prompt sequences for more\nstreamlined workflows, thus allowing agents with human-like domain expertise to\nverify intermediate results and reduce errors. MetaGPT utilizes an assembly\nline paradigm to assign diverse roles to various agents, efficiently breaking\ndown complex tasks into subtasks involving many agents working together. On\ncollaborative software engineering benchmarks, MetaGPT generates more coherent\nsolutions than previous chat-based multi-agent systems. Our project can be\nfound at https://github.com/geekan/MetaGPT", + "authors": "Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, J\u00fcrgen Schmidhuber", + "published": "2023-08-01", + "updated": "2023-11-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14325v1", + "title": "Improving Factuality and Reasoning in Language Models through Multiagent Debate", + "abstract": "Large language models (LLMs) have demonstrated remarkable capabilities in\nlanguage generation, understanding, and few-shot learning in recent years. An\nextensive body of work has explored how their performance may be further\nimproved through the tools of prompting, ranging from verification,\nself-consistency, or intermediate scratchpads. In this paper, we present a\ncomplementary approach to improve language responses where multiple language\nmodel instances propose and debate their individual responses and reasoning\nprocesses over multiple rounds to arrive at a common final answer. Our findings\nindicate that this approach significantly enhances mathematical and strategic\nreasoning across a number of tasks. We also demonstrate that our approach\nimproves the factual validity of generated content, reducing fallacious answers\nand hallucinations that contemporary models are prone to. Our approach may be\ndirectly applied to existing black-box models and uses identical procedure and\nprompts for all tasks we investigate. Overall, our findings suggest that such\n\"society of minds\" approach has the potential to significantly advance the\ncapabilities of LLMs and pave the way for further breakthroughs in language\ngeneration and understanding.", + "authors": "Yilun Du, Shuang Li, Antonio Torralba, Joshua B. 
Tenenbaum, Igor Mordatch", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.12588v4", + "title": "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks", + "abstract": "Recently, there has been significant progress in teaching language models to\nperform step-by-step reasoning to solve complex numerical reasoning tasks.\nChain-of-thoughts prompting (CoT) is by far the state-of-art method for these\ntasks. CoT uses language models to perform both reasoning and computation in\nthe multi-step `thought' process. To disentangle computation from reasoning, we\npropose `Program of Thoughts' (PoT), which uses language models (mainly Codex)\nto express the reasoning process as a program. The computation is relegated to\nan external computer, which executes the generated programs to derive the\nanswer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP,\nTabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA)\nfor both few-shot and zero-shot setups. Under both few-shot and zero-shot\nsettings, PoT can show an average performance gain over CoT by around 12\\%\nacross all the evaluated datasets. By combining PoT with self-consistency\ndecoding, we can achieve SoTA performance on all math problem datasets and\nnear-SoTA performance on financial datasets. All of our data and code are\nreleased in Github https://github.com/wenhuchen/Program-of-Thoughts", + "authors": "Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen", + "published": "2022-11-22", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.12032v5", + "title": "From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning", + "abstract": "In the realm of Large Language Models (LLMs), the balance between instruction\ndata quality and quantity is a focal point. Recognizing this, we introduce a\nself-guided methodology for LLMs to autonomously discern and select cherry\nsamples from open-source datasets, effectively minimizing manual curation and\npotential cost for instruction tuning an LLM. Our key innovation, the\nInstruction-Following Difficulty (IFD) metric, emerges as a pivotal metric to\nidentify discrepancies between a model's expected responses and its intrinsic\ngeneration capability. Through the application of IFD, cherry samples can be\npinpointed, leading to a marked uptick in model training efficiency. Empirical\nvalidations on datasets like Alpaca and WizardLM underpin our findings; with a\nmere $10\\%$ of original data input, our strategy showcases improved results.\nThis synthesis of self-guided cherry-picking and the IFD metric signifies a\ntransformative leap in the instruction tuning of LLMs, promising both\nefficiency and resource-conscious advancements. 
Codes, data, and models are\navailable: https://github.com/tianyi-lab/Cherry_LLM", + "authors": "Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, Jing Xiao", + "published": "2023-08-23", + "updated": "2024-04-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.08155v2", + "title": "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation", + "abstract": "AutoGen is an open-source framework that allows developers to build LLM\napplications via multiple agents that can converse with each other to\naccomplish tasks. AutoGen agents are customizable, conversable, and can operate\nin various modes that employ combinations of LLMs, human inputs, and tools.\nUsing AutoGen, developers can also flexibly define agent interaction behaviors.\nBoth natural language and computer code can be used to program flexible\nconversation patterns for different applications. AutoGen serves as a generic\ninfrastructure to build diverse applications of various complexities and LLM\ncapacities. Empirical studies demonstrate the effectiveness of the framework in\nmany example applications, with domains ranging from mathematics, coding,\nquestion answering, operations research, online decision-making, entertainment,\netc.", + "authors": "Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, Chi Wang", + "published": "2023-08-16", + "updated": "2023-10-03", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.04091v3", + "title": "Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models", + "abstract": "Large language models (LLMs) have recently been shown to deliver impressive\nperformance in various NLP tasks. To tackle multi-step reasoning tasks,\nfew-shot chain-of-thought (CoT) prompting includes a few manually crafted\nstep-by-step reasoning demonstrations which enable LLMs to explicitly generate\nreasoning steps and improve their reasoning task accuracy. To eliminate the\nmanual effort, Zero-shot-CoT concatenates the target problem statement with\n\"Let's think step by step\" as an input prompt to LLMs. Despite the success of\nZero-shot-CoT, it still suffers from three pitfalls: calculation errors,\nmissing-step errors, and semantic misunderstanding errors. To address the\nmissing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of\ntwo components: first, devising a plan to divide the entire task into smaller\nsubtasks, and then carrying out the subtasks according to the plan. To address\nthe calculation errors and improve the quality of generated reasoning steps, we\nextend PS prompting with more detailed instructions and derive PS+ prompting.\nWe evaluate our proposed prompting strategy on ten datasets across three\nreasoning problems. The experimental results over GPT-3 show that our proposed\nzero-shot prompting consistently outperforms Zero-shot-CoT across all datasets\nby a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought\nPrompting, and has comparable performance with 8-shot CoT prompting on the math\nreasoning problem. 
The code can be found at\nhttps://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.", + "authors": "Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim", + "published": "2023-05-06", + "updated": "2023-05-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.04684v2", + "title": "Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind", + "abstract": "When reading a story, humans can quickly understand new fictional characters\nwith a few observations, mainly by drawing analogies to fictional and real\npeople they already know. This reflects the few-shot and meta-learning essence\nof humans' inference of characters' mental states, i.e., theory-of-mind (ToM),\nwhich is largely ignored in existing research. We fill this gap with a novel\nNLP dataset, ToM-in-AMC, the first assessment of machines' meta-learning of ToM\nin a realistic narrative understanding scenario. Our dataset consists of ~1,000\nparsed movie scripts, each corresponding to a few-shot character understanding\ntask that requires models to mimic humans' ability of fast digesting characters\nwith a few starting scenes in a new movie.\n We propose a novel ToM prompting approach designed to explicitly assess the\ninfluence of multiple ToM dimensions. It surpasses existing baseline models,\nunderscoring the significance of modeling multiple ToM dimensions for our task.\nOur extensive human study verifies that humans are capable of solving our\nproblem by inferring characters' mental states based on their previously seen\nmovies. In comparison, our systems based on either state-of-the-art large\nlanguage models (GPT-4) or meta-learning algorithms lags >20% behind,\nhighlighting a notable limitation in existing approaches' ToM capabilities.", + "authors": "Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou", + "published": "2022-11-09", + "updated": "2024-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.11760v7", + "title": "Large Language Models Understand and Can be Enhanced by Emotional Stimuli", + "abstract": "Emotional intelligence significantly impacts our daily behaviors and\ninteractions. Although Large Language Models (LLMs) are increasingly viewed as\na stride toward artificial general intelligence, exhibiting impressive\nperformance in numerous tasks, it is still uncertain if LLMs can genuinely\ngrasp psychological emotional stimuli. Understanding and responding to\nemotional cues gives humans a distinct advantage in problem-solving. In this\npaper, we take the first step towards exploring the ability of LLMs to\nunderstand emotional stimuli. To this end, we first conduct automatic\nexperiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,\nLlama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative\napplications that represent comprehensive evaluation scenarios. Our automatic\nexperiments show that LLMs have a grasp of emotional intelligence, and their\nperformance can be improved with emotional prompts (which we call\n\"EmotionPrompt\" that combines the original prompt with emotional stimuli),\ne.g., 8.00% relative performance improvement in Instruction Induction and 115%\nin BIG-Bench. 
In addition to those deterministic tasks that can be\nautomatically evaluated using existing metrics, we conducted a human study with\n106 participants to assess the quality of generative tasks using both vanilla\nand emotional prompts. Our human study results demonstrate that EmotionPrompt\nsignificantly boosts the performance of generative tasks (10.9% average\nimprovement in terms of performance, truthfulness, and responsibility metrics).\nWe provide an in-depth discussion regarding why EmotionPrompt works for LLMs\nand the factors that may influence its performance. We posit that EmotionPrompt\nheralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs\ninteraction.", + "authors": "Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie", + "published": "2023-07-14", + "updated": "2023-11-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.11672v2", + "title": "Open-ended Commonsense Reasoning with Unrestricted Answer Scope", + "abstract": "Open-ended Commonsense Reasoning is defined as solving a commonsense question\nwithout providing 1) a short list of answer candidates and 2) a pre-defined\nanswer scope. Conventional ways of formulating the commonsense question into a\nquestion-answering form or utilizing external knowledge to learn\nretrieval-based methods are less applicable in the open-ended setting due to an\ninherent challenge. Without pre-defining an answer scope or a few candidates,\nopen-ended commonsense reasoning entails predicting answers by searching over\nan extremely large searching space. Moreover, most questions require implicit\nmulti-hop reasoning, which presents even more challenges to our problem. In\nthis work, we leverage pre-trained language models to iteratively retrieve\nreasoning paths on the external knowledge base, which does not require\ntask-specific supervision. The reasoning paths can help to identify the most\nprecise answer to the commonsense question. We conduct experiments on two\ncommonsense benchmark datasets. Compared to other approaches, our proposed\nmethod achieves better performance both quantitatively and qualitatively.", + "authors": "Chen Ling, Xuchao Zhang, Xujiang Zhao, Yanchi Liu, Wei Cheng, Mika Oishi, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao", + "published": "2023-10-18", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14992v2", + "title": "Reasoning with Language Model is Planning with World Model", + "abstract": "Large language models (LLMs) have shown remarkable reasoning capabilities,\nespecially when prompted to generate intermediate reasoning steps (e.g.,\nChain-of-Thought, CoT). However, LLMs can still struggle with problems that are\neasy for humans, such as generating action plans for executing tasks in a given\nenvironment, or performing complex math, logical, and commonsense reasoning.\nThe deficiency stems from the key fact that LLMs lack an internal\n$\\textit{world model}$ to predict the world $\\textit{state}$ (e.g., environment\nstatus, intermediate variable values) and simulate long-term outcomes of\nactions. 
This prevents LLMs from performing deliberate planning akin to human\nbrains, which involves exploring alternative reasoning paths, anticipating\nfuture states and rewards, and iteratively refining existing reasoning steps.\nTo overcome the limitations, we propose a new LLM reasoning framework,\n$\\underline{R}$easoning vi$\\underline{a}$ $\\underline{P}$lanning\n$\\textbf{(RAP)}$. RAP repurposes the LLM as both a world model and a reasoning\nagent, and incorporates a principled planning algorithm (based on Monte Carlo\nTree Search) for strategic exploration in the vast reasoning space. During\nreasoning, the LLM (as agent) incrementally builds a reasoning tree under the\nguidance of the LLM (as world model) and task-specific rewards, and obtains a\nhigh-reward reasoning path efficiently with a proper balance between\nexploration $\\textit{vs.}$ exploitation. We apply RAP to a variety of\nchallenging reasoning problems including plan generation, math reasoning, and\nlogical inference. Empirical results on these tasks demonstrate the superiority\nof RAP over various strong baselines, including CoT and least-to-most prompting\nwith self-consistency. RAP on LLAMA-33B surpasses CoT on GPT-4 with 33%\nrelative improvement in a plan generation setting.", + "authors": "Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu", + "published": "2023-05-24", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.10625v3", + "title": "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models", + "abstract": "Chain-of-thought prompting has demonstrated remarkable performance on various\nnatural language reasoning tasks. However, it tends to perform poorly on tasks\nwhich requires solving problems harder than the exemplars shown in the prompts.\nTo overcome this challenge of easy-to-hard generalization, we propose a novel\nprompting strategy, least-to-most prompting. The key idea in this strategy is\nto break down a complex problem into a series of simpler subproblems and then\nsolve them in sequence. Solving each subproblem is facilitated by the answers\nto previously solved subproblems. Our experimental results on tasks related to\nsymbolic manipulation, compositional generalization, and math reasoning reveal\nthat least-to-most prompting is capable of generalizing to more difficult\nproblems than those seen in the prompts. A notable finding is that when the\nGPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve\nthe compositional generalization benchmark SCAN in any split (including length\nsplit) with an accuracy of at least 99% using just 14 exemplars, compared to\nonly 16% accuracy with chain-of-thought prompting. This is particularly\nnoteworthy because neural-symbolic models in the literature that specialize in\nsolving SCAN are trained on the entire training set containing over 15,000\nexamples. 
We have included prompts for all the tasks in the Appendix.", + "authors": "Denny Zhou, Nathanael Sch\u00e4rli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi", + "published": "2022-05-21", + "updated": "2023-04-16", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.01337v2", + "title": "An Empirical Study on Challenging Math Problem Solving with GPT-4", + "abstract": "Employing Large Language Models (LLMs) to address mathematical problems is an\nintriguing research endeavor, considering the abundance of math problems\nexpressed in natural language across numerous science and engineering fields.\nWhile several prior works have investigated solving elementary mathematics\nusing LLMs, this work explores the frontier of using GPT-4 for solving more\ncomplex and challenging math problems. We evaluate various ways of using GPT-4.\nSome of them are adapted from existing work, and one is MathChat, a\nconversational problem-solving framework newly proposed in this work. We\nperform the evaluation on difficult high school competition problems from the\nMATH dataset, which shows the advantage of the proposed conversational\napproach.", + "authors": "Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang", + "published": "2023-06-02", + "updated": "2023-06-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. 
This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. 
Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. 
Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. 
Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. 
Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. 
Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. 
However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. 
However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to model weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment?
In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. 
We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. 
We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. 
Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. 
Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. 
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. 
The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. 
Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. 
Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. 
Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D.
Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs.
Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. 
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. 
Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. 
To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. 
Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). 
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. 
To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. 
Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? 
An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. 
Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. 
The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demands sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex, a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. 
We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. 
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } +] \ No newline at end of file