diff --git "a/related_34K/test_related_short_2405.01379v1.json" "b/related_34K/test_related_short_2405.01379v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2405.01379v1.json" @@ -0,0 +1,1427 @@ +[ + { + "url": "http://arxiv.org/abs/2405.01379v1", + "title": "Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving", + "abstract": "Natural language explanations have become a proxy for evaluating explainable\nand multi-step Natural Language Inference (NLI) models. However, assessing the\nvalidity of explanations for NLI is challenging as it typically involves the\ncrowd-sourcing of apposite datasets, a process that is time-consuming and prone\nto logical errors. To address existing limitations, this paper investigates the\nverification and refinement of natural language explanations through the\nintegration of Large Language Models (LLMs) and Theorem Provers (TPs).\nSpecifically, we present a neuro-symbolic framework, named Explanation-Refiner,\nthat augments a TP with LLMs to generate and formalise explanatory sentences\nand suggest potential inference strategies for NLI. In turn, the TP is employed\nto provide formal guarantees on the logical validity of the explanations and to\ngenerate feedback for subsequent improvements. We demonstrate how\nExplanation-Refiner can be jointly used to evaluate explanatory reasoning,\nautoformalisation, and error correction mechanisms of state-of-the-art LLMs as\nwell as to automatically enhance the quality of human-annotated explanations of\nvariable complexity in different domains.", + "authors": "Xin Quan, Marco Valentino, Louise A. Dennis, Andr\u00e9 Freitas", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "5.1 LLMs Self-Refinement from External Feedback Self-refinement of LLMs has demonstrated promising effectiveness in generating faithful and trustworthy responses (Pan et al., 2023b). The use of external feedback to guide LLMs has been extensively studied (Olausson et al., 2024a; Yu et al., 2023; Akyurek et al., 2023). Previous work such as Peng et al. (2023) and Li et al. (2024) have employed facts retrieved from external knowledge bases as sources of feedback, while Paul et al. (2024) developed a critic model to provide feedback for reasoning refinement. Additionally, Nathani et al. (2023) have explored the use of feedback models for automated feedback generation. Various works have also investigated tasks related to code generation (Chen et al., 2023; Olausson et al., 2024b) and the creation of either synthetic or expert-written logical natural language expressions (Olausson et al., 2023). Quan et al. (2024) leverage a differentiable logic reasoner to verify and refine explanations through abductive reasoning , enhancing the logical consistency of explanations in ethical NLI tasks based on the solver\u2019s output. This paper focuses on the automated refinement of natural language sentences created by human annotators. It verifies and refines the logical validity of these human-generated sentences using detailed external feedback, which can identify the exact erroneous steps to effectively refine logical errors in the explanatory sentences. 5.2 Autoformalisation Autoformalisation refers to the process of translating natural language descriptions into symbolic representations. 
Research in this area has included the formalisation of mathematical proofs (Cunningham et al., 2022; Wu et al., 2022; First et al., 2023), and efforts to transform natural language sentences into logical forms using LLMs (Pan et al., 2023a; Olausson et al., 2023; Jiang et al., 2024b; Dalal et al., 2024). However, contextual information is frequently lost when sentences are translated into these logical frameworks. To mitigate semantic loss during the transformation process, we leverage Neo-Davidsonian event semantics, which aims to maximise the preservation of sentence-level content. This representation paradigm facilitates a more systematic, content-preserving translation to logical forms that is largely independent of particular representation schemas. 5.3 Conclusion In this work, we present a novel neuro-symbolic framework that automatically verifies and refines natural language explanations using iterative refinement cycles between LLMs and theorem provers (TPs). We have conducted extensive experiments on both textual entailment and multiple-choice question answering tasks, demonstrating that the proposed method, Explanation-Refiner, effectively enhances the logical validity of human-annotated explanations. We investigated the model\u2019s performance on explanatory structures ranging from simple to complex and introduced a method that prevents the loss of semantic information in autoformalisation while correcting syntax errors. Furthermore, we analysed each refinement cycle to evaluate how the explanation evolves from iteration to iteration. In future work, we aim to enhance the framework\u2019s robustness to complex and unstructured explanations while requiring fewer iterations, thereby improving the model\u2019s efficiency. 5.4 Limitations While this work has demonstrated significant improvements in enhancing the logical consistency of explanations, the connection between improved logical consistency and AI safety still needs further investigation. Although the use of formal solvers in conjunction with LLMs offers a promising avenue for improving the consistency of reasoning within LLMs, these methodologies need to be further developed and critically assessed as mechanisms that can provide guarantees of correctness, consistency and completeness within critical application domains.", + "pre_questions": [], + "main_content": "Introduction A recent line of research in Natural Language Inference (NLI) focuses on developing models capable of generating natural language explanations in support of their predictions (Thayaparan et al., 2021; Chen et al., 2021; Valentino et al., 2022; Bostrom et al., 2022; Weir et al., 2023). Since natural language explanations can be used as a proxy to evaluate the underlying reasoning process of NLI models (Kumar and Talukdar, 2020; Zhao and Vydiswaran, 2021; Chen et al., 2021), researchers have proposed different methods for assessing their intrinsic quality (Wiegreffe and Marasovic, 2021; Camburu et al., 2020; Valentino et al., 2021; Atanasova et al., 2023; Quan et al., 2024; Dalal et al., 2024), including the adoption of language generation metrics for a direct comparison between models\u2019 generated explanations and human-annotated explanations. However, this process is subject to different types of limitations. 
First, the use of language generation metrics requires the crowd-sourcing of explanation corpora to augment existing NLI datasets (Wiegreffe and Marasovic, 2021), a process that is typically time-consuming and susceptible to errors (Liu et al., 2022; Zhao et al., 2023). Second, language generation metrics have been shown to fail to capture fine-grained properties that are fundamental for NLI, such as logical reasoning, faithfulness, and robustness (Atanasova et al., 2023; Camburu et al., 2020; Chan et al., 2022; Quan et al., 2024). Third, human explanations in NLI datasets tend to be incomplete and contain logical errors that could heavily bias the evaluation (Elazar et al., 2021; Valentino et al., 2021). In this paper, we investigate the integration of state-of-the-art LLM-based explanation generation models for NLI with external logical solvers to jointly evaluate explanatory reasoning (Pan et al., 2023a; Olausson et al., 2023; Jiang et al., 2024b) and enhance the quality of crowd-sourced explanations. In particular, we present a neuro-symbolic framework, named Explanation-Refiner, that augments a Theorem Prover (TP) with Large Language Models (LLMs) to investigate the following research questions: RQ1: \u201cCan the integration of LLMs and TPs provide a mechanism for automatic verification and refinement of natural language explanations?\u201d; RQ2: \u201cCan the integration of LLMs and TPs improve the logical validity of human-annotated explanations?\u201d; RQ3: \u201cTo what extent are state-of-the-art LLMs capable of explanatory reasoning, autoformalisation, and error correction for NLI in different domains?\u201d. To answer these questions, Explanation-Refiner employs LLMs to generate and formalise explanatory sentences and to suggest potential inference strategies for building non-redundant, complete, and logically valid explanations for NLI. In turn, the TP is adopted to verify the validity of the explanations through the construction of deductive proofs and the generation of fine-grained feedback for LLMs. We instantiate Explanation-Refiner with state-of-the-art LLMs (i.e., GPT-4 (OpenAI, 2023), GPT-3.5 (Brown et al., 2020), Llama (Touvron et al., 2023), and Mistral (Jiang et al., 2024a)) and the Isabelle/HOL proof assistant (Nipkow et al., 2002), utilising Neo-Davidsonian event semantics (Parsons, 1990) coupled with First-Order Logic (FOL) to effectively and systematically translate natural language sentences into logical forms. Our empirical analysis, carried out on three NLI datasets of variable complexity (i.e., e-SNLI (Camburu et al., 2018), QASC (Khot et al., 2019), and WorldTree (Jansen et al., 2018)), reveals that external feedback from TPs is effective in improving the quality of natural language explanations, leading to an increase in logical validity using GPT-4 from 36% to 84%, 12% to 55%, and 2% to 37% (on e-SNLI, QASC, and WorldTree, respectively). At the same time, the results demonstrate that integrating external TPs with LLMs can reduce errors in autoformalisation, with an average reduction in syntax errors of 68.67%, 62.31%, and 55.17%. Finally, we found notable differences in performance across LLMs and NLI datasets, with closed-source LLMs (i.e., GPT-4 and GPT-3.5) significantly outperforming open-source models (i.e., Mistral and Llama) on both explanatory reasoning and autoformalisation, along with a shared tendency of LLMs to struggle with increasing explanation complexity. 
To summarise, the main contributions of this paper are: 1. We introduce Explanation-Refiner, a novel neuro-symbolic framework that integrates LLMs with an external theorem prover. This framework automatically verifies and refines explanatory sentences in NLI tasks using objective external feedback. 2. We utilise Neo-Davidsonian event semantics coupled with FOL to effectively translate natural language sentences into logical forms while minimising semantic information loss. Additionally, we introduce a novel method that leverages a theorem prover and a proof assistant for verifying NLI explanations and a syntactic refiner to minimise syntax errors in responses generated by LLMs. 3. We conduct a comprehensive series of experiments to assess Explanation-Refiner across five LLMs on three datasets, which include between 1 and 16 explanatory sentences. These experiments span a range of tasks from simple textual entailment to complex multiple-choice question answering in different context domains. 4. We perform extensive quantitative and qualitative analyses to explore the explanation refinement process. Our investigations delve into the LLMs\u2019 inference capabilities, revealing the strengths and limitations of different models in producing verifiable and explainable logical reasoning for NLI. 2 Explanation Verification and Refinement Explanation-based NLI is widely adopted to evaluate the reasoning process of multi-step inference models via the construction of natural language explanations. In this work, we refer to the following formalisation for Explanation-based NLI: given a premise sentence pi, a hypothesis sentence hi, and an explanation Ei consisting of a set of facts {f1, f2, ..., fn}, the explanation Ei is logically valid if and only if the entailment pi \u222a Ei \u22a8 hi holds. This entailment is considered verifiable if {pi, Ei, hi} can be translated into a set of logical forms \u03a6 that compose a theory \u0398. The validity of this theory, \u0398, is subsequently determined by a theorem prover, verifying whether \u0398 \u22a8 \u03c8, where \u03c8 represents a logical consequence derived from the logical form of hi. We aim to automatically verify the logical validity of explanation Ei, and if \u0398 \u22a8 \u03c8 is rejected by the theorem prover, a further refinement stage is initiated to refine the facts {f1, f2, ..., fn} based on external feedback, resulting in an updated explanation E\u2032i. Thus, an explanation is accepted if all the facts are logically consistent, complementary, and non-redundant in supporting the derivation. 3 Explanation-Refiner Figure 1 ((a) Axiomatisation; (b) Inference, Verification and Refinement): The overall pipeline of Explanation-Refiner: an NLI problem is converted into axioms and theorems that form a theorem prover\u2019s theory, along with proof steps derived from a preliminary inference, and sent to the theorem prover. In case the proof fails (logically invalid), the erroneous step, along with the proof strategy and proof steps, is extracted as feedback to refine the explanatory sentences in a new iteration. To verify the logical validity and refine any logical errors in explanatory sentences for NLI tasks, we present a neuro-symbolic framework, Explanation-Refiner, to iteratively check and refine the explanation Ei based on external feedback; Isabelle (Paulson, 1994) and its proof assistant Isabelle/HOL (Nipkow et al., 2002) are applied as external tools to perform the logical deduction and provide feedback for explanation refinement. Figure 1 shows an overview of our proposed framework. Given an NLI task, to evaluate the logical validity of the entailment, the LLM is prompted to perform an autoformalisation process that transforms natural language sentences into formal language represented in the form of an Isabelle theory. Each fact f \u2208 Ei is converted into an axiom ai, where each ai is an element of the set A = {a1, a2, ..., an}. The premise pi and the corresponding hypothesis hi are converted into a theorem for proving pi \u2227 B \u2192 hi, where B \u2286 A. A syntax refinement mechanism is subsequently applied to the previously translated symbolic forms. The theorem prover is employed as a checker to identify any syntax errors and provide these error details as feedback to the LLM, enabling the LLM to iteratively correct the syntax errors over a fixed number of iterations, denoted by t. With the proof assistant, we can perform automated reasoning in the theorem prover by constructing proof steps; a minimal sketch of this theory construction step is shown below.
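To make the axiomatisation step concrete, the following is a minimal Python sketch of how an Isabelle theory can be assembled once the LLM has produced the logical forms: one axiom per explanatory sentence, plus a theorem built from the premise and the hypothesis, mirroring the layout shown in Figures 1 and 2. The function and argument names are illustrative rather than the implementation used in the paper, constant declarations are omitted for brevity (as in the snippets reported in the paper), and the prompts used to obtain the logical forms are not reproduced.

# Minimal sketch of the axiomatisation step (names are illustrative).
def build_theory(name, axiom_forms, premise_form, hypothesis_form):
    # One axiom per explanatory sentence: explanation_1, explanation_2, ...
    lines = [f"theory {name}", "imports Main", "begin", ""]
    for i, form in enumerate(axiom_forms, start=1):
        lines += ["axiomatization where", f'  explanation_{i}: "{form}"', ""]
    # The premise becomes the assumption and the hypothesis the proof goal.
    lines += ["theorem hypothesis:",
              f'  assumes asm: "{premise_form}"',
              f'  shows "{hypothesis_form}"',
              "  oops (* proof steps are filled in later by the LLM *)",
              "", "end"]
    return "\n".join(lines)

# Example usage with the e-SNLI instance of Figure 1 (logical forms abridged):
print(build_theory(
    name="esnli_example",
    axiom_forms=[r"\<forall>x y e1 e2. Someone x \<and> Speech y \<and> Gives e1 \<and> "
                 r"Agent e1 x \<and> Patient e1 y \<longrightarrow> Speaking e2 \<and> Agent e2 x"],
    premise_form=r"Man x \<and> Speech y \<and> Gives e \<and> Agent e x \<and> Patient e y",
    hypothesis_form=r"\<exists>x e. Man x \<and> Speaking e \<and> Agent e x"))

In the full system, the placeholder proof is replaced by the proof steps suggested by the LLM before the theory is sent to the prover.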
Step 3 initially generates a rough inference that states a preliminary proof strategy in natural language and elicits the facts f \u2208 Ei which are non-redundant and essential for entailing the hypothesis hi. Based on this preliminary proof strategy, the LLM is prompted to construct and formalise the proof steps for proving the theorem. In step 5, the theorem prover verifies the constructed theory by attempting to prove the theorem. If it is solvable, we regard the explanation as logically valid. If the prover fails at one of the proof steps, we extract this failed step along with the applied axioms B \u2286 A as the external feedback for the LLM. This feedback is used to refine the logical error within B and consequently refine the facts f \u2208 Ei, which were previously converted from natural language. 3.1 Autoformalisation In order to formally verify the logical validity of the explanations, we adopted Neo-Davidsonian event-based semantics and FOL. theorem hypothesis: (* Premise: A smiling woman is playing the violin in front of a turquoise background. *) assumes asm: \"Woman x \u2227 Violin y \u2227 Background z \u2227 Turquoise z \u2227 Smiling x \u2227 Playing e \u2227 Agent e x \u2227 Patient e y \u2227 InFrontOf x z\" (* Hypothesis: A woman is playing an instrument. *) shows \"\u2203x y e. Woman x \u2227 Instrument y \u2227 Playing e \u2227 Agent e x \u2227 Patient e y\" Figure 2: An example of representing the premise and hypothesis sentences as an Isabelle theorem. proof - from asm have \"Woman x \u2227 Violin y \u2227 Playing e \u2227 Agent e x \u2227 Patient e y\" by blast then have \"Woman x \u2227 Instrument y \u2227 Playing e \u2227 Agent e x \u2227 Patient e y\" using explanation_1 by blast then show ?thesis using asm by blast qed Figure 3: An example of a proof constructed by the Isabelle/HOL proof assistant to verify the hypothesis. Neo-Davidsonian Event Semantics Preventing the loss of semantic information during the representation of natural language sentences in logical forms, such as FOL, poses significant challenges when using LLMs, particularly with long and complex sentences that are crucial for logical reasoning (Olausson et al., 2023). Neo-Davidsonian event semantics (Parsons, 1990) utilises event variables to represent the verb predicates and their corresponding object arguments as semantic roles. This approach establishes a predicate-argument structure that preserves the information content and faithfulness of complex sentences, staying closer to the surface form of the sentence. For example, the sentence \u2018A wolf eating a sheep is an example of a predator hunting prey\u2019 can be formalised as follows: \u2200x y e1 (wolf(x) \u2227 sheep(y) \u2227 eating(e1) \u2227 agent(e1, x) \u2227 patient(e1, y) \u2192 (\u2203e2. predator(x) \u2227 prey(y) \u2227 hunting(e2) \u2227 agent(e2, x) \u2227 patient(e2, y) \u2227 example(e1, e2))) (1) In (1), the verbs are represented as the events \u2018eating\u2019 and \u2018hunting,\u2019 where the agent and patient arguments correspond to the entities performing and receiving the actions within these events, respectively. The logical form example(e1, e2) explicitly captures the semantic meaning of this sentence: the event of a wolf eating a sheep as an exemplar of a predator hunting prey. Similarly, whenever there are no action verbs involved in a sentence, we utilise FOL to represent the static or descriptive aspects. 
For instance: \u2200x(gravity(x) \u2192force(x)) (2) \u2200xy(greater(x, y) \u2192larger(x, y)) (3) The above logical forms correspond to the sentences \u2018gravity is a kind of force\u2019 and \u2018greater means larger\u2019, respectively. Isabelle Theory Construction For the Isabelle theorem prover, a theory script is essential to facilitate the proof of a theorem. Therefore, we designate explanatory sentences as axioms: (* Explanation 1: A violin is an instrument. *) axiomatization where explanation_1: \"\u2200x. Violin x \u2212 \u2192Instrument x\" Additionally, as illustrated in Figure 2, both the premises and hypothesis are defined as constituting parts of the theorem. The \u2018assumes asm\u2019 clause generally comprises unquantified, specific propositions or conjunctions of propositions, which are recognised as known truths. The \u2018show\u2019 clause denotes the conclusion (hypothesis) for which we seek to establish proof through logical deductions based on the assumed propositions and axioms. Syntax Error Refiner Recent studies (Gou et al., 2024; Olausson et al., 2023) have revealed persistent syntax errors when prompting LLMs for code and symbolic form generation tasks. The method proposed by Pan et al. (2023a) utilises error messages from symbolic solvers to iteratively refine LLM outputs. Following a similar approach, we categorised the syntax errors into two distinct subdomains based on feedback from Isabelle: type unification errors and other syntax errors. Type unification errors primarily arise from mismatches between declared and actual argument types in logical clauses. Other syntax errors typically involve missing brackets, undefined entity names, or invalid logical symbols. Our process involves using Isabelle to identify syntax errors in the transferred theory, extracting these error messages, and then prompting the LLM with these messages along with few-shot examples. This guides the model on how to correct each type of syntax error over a series of iterations, allowing for continuous verification and refinement. Details of the autoformalisation prompts are described in Appendix A.4.1. 3.2 Proof Construction A proof provides a detailed, step-by-step strategy that elucidates the logical connections and unification among axioms to support the reasoning process aimed at achieving the solver\u2019s goal. Initially, we prompt the LLM to create a preliminary proof to assess how it infers the hypothesis and to identify which explanatory sentences are relevant, redundant, or unrelated. Based on this initial proof, we then guide the LLM to develop an Isabelle proof (figure 3) that utilise the Isabelle/HOL proof assistant to clearly demonstrate the explanatory sentences (axioms) required to prove the hypothesis. Details of proof construction process prompts are described in Appendix A.4.2. 3.3 Verify and Refine Finally, the constructed theory, which includes axioms, theorems, and proof steps, is submitted to the theorem prover for verification. If the theory is validated, it outputs a logically sound explanation. If the proof fails or timeouts, we extract the first error from the solver\u2019s error message, identify the corresponding proof step, and locate the related explanatory sentences (axioms) from the theory. We begin by removing redundant and irrelevant facts that are not present in the preceding Isabelle proof steps or are declared as such in the text inference strategy. 
Then, we prompt the LLM to refine the explanatory sentences by providing it with the error message, the failed proof step, the associated proof strategy, and the relevant explanatory sentences for a further iteration. This process is iterative and progressive; with each iteration, the framework addresses one or more logical errors, continually refining the explanatory sentences to ultimately yield a logically valid and verifiable explanation. Additional details on the prompts used for refinement are described in Appendix A.4.3. 4 Empirical Evaluation To assess the effectiveness of integrating LLMs with external feedback for refinement purposes, we evaluated Explanation-Refiner on two NLI tasks: textual entailment and multiple-choice question answering. For textual entailment, given a premise pi and an explanatory sentence as explanation Ei, the goal is to determine whether pi and Ei together entail a hypothesis hi. In the multiple-choice question answering task, there is a question q accompanied by a set of candidate answers C = {c1, c2, ..., cn}, with ci identified as the correct answer. We convert q and the correct answer ci into the hypothesis hi. Explanatory facts serve as evidence supporting ci as the correct answer and are denoted as Ei, with the question\u2019s context sentence as the premise pi. 4.1 Datasets We adopted three different NLI datasets for evaluation: e-SNLI (Camburu et al., 2018), QASC (Khot et al., 2019), and WorldTree (Jansen et al., 2018), using a total of 300 samples selected via the sampling strategy defined in (Valentino et al., 2021), which maximises representativeness and mutual exclusivity across syntactic and semantic features expressed in the datasets. e-SNLI is a crowd-sourced dataset typically used as a benchmark for textual entailment. It comprises one premise sentence, one explanatory sentence, and one hypothesis sentence per sample. QASC and WorldTree, on the other hand, are datasets designed for multiple-choice question answering within the scientific domain. QASC includes two explanatory sentences for each correct answer. WorldTree is the most complex of the three, featuring between 1 and 16 explanatory sentences in each sample. 4.2 Theorem Prover We adopted Isabelle (Paulson, 1994) as our theorem prover and Isabelle/HOL (Nipkow et al., 2002) as the proof assistant. To integrate this theorem prover as a real-time verification tool with LLMs, we employ a Python client (Shminke, 2022) as a TCP (Transmission Control Protocol) client to configure Isabelle as a server. This enables the communication of the constructed theory files and the extraction of the response messages from Isabelle. 
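As a companion to the description above, the following Python sketch shows how the verification and refinement cycle of Sections 3.1-3.3 can be wired around such a theorem prover interface. It is a minimal sketch under stated assumptions: the LLM calls (formalise, fix_syntax, plan_proof, refine) and the prover calls (check_syntax, prove) are passed in as plain callables, since the exact prompts and the API of the Isabelle TCP client are not reproduced here; the default budgets of ten refinement iterations and three syntax-repair iterations follow the values used in the experiments.

from typing import Callable, List, Tuple

# Minimal sketch of the outer verify-and-refine loop (all callables are
# placeholders for the LLM prompts and the Isabelle server interaction).
def refine_explanation(premise: str,
                       hypothesis: str,
                       explanation: List[str],
                       formalise: Callable[[str, str, List[str]], str],
                       check_syntax: Callable[[str], str],
                       fix_syntax: Callable[[str, str], str],
                       plan_proof: Callable[[str], str],
                       prove: Callable[[str], Tuple[bool, str]],
                       refine: Callable[[List[str], str], List[str]],
                       max_iterations: int = 10,
                       max_syntax_iterations: int = 3) -> Tuple[List[str], bool]:
    for _ in range(max_iterations):
        # Steps 1-2: autoformalise the NLI problem into an Isabelle theory and
        # repair residual syntax errors using the prover error messages.
        theory = formalise(premise, hypothesis, explanation)
        for _ in range(max_syntax_iterations):
            error = check_syntax(theory)  # empty string means no syntax errors
            if not error:
                break
            theory = fix_syntax(theory, error)
        # Steps 3-4: draft a rough inference strategy and formal proof steps.
        theory_with_proof = plan_proof(theory)
        # Step 5: attempt the proof; success means the explanation is valid.
        solved, feedback = prove(theory_with_proof)
        if solved:
            return explanation, True
        # Step 6: feed the failed step and its axioms back to the LLM, which
        # rewrites the explanatory sentences for the next iteration.
        explanation = refine(explanation, feedback)
    return explanation, False

Keeping the prover and the LLM behind separate callables mirrors the modular design of the framework, in which different LLMs can be slotted into individual components, as in the ablation study where GPT-4 handles autoformalisation while a base model handles textual inference.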
Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 0.0 16.0 22.0 19.0 36.0 7.0 32.0 36.0 55.0 84.0 6.0 3.28 1.58 2.93 1.96 e-SNLI Initially Valid Explanations Finally Valid Explanations Number of Iterations (a) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 3.0 3.0 0.0 6.0 12.0 25.0 12.0 15.0 44.0 55.0 3.6 3.58 4.8 4.11 3.55 QASC Initially Valid Explanations Finally Valid Explanations Number of Iterations (b) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 0.0 1.0 0.0 0.0 2.0 3.0 6.0 5.0 29.0 37.0 3.33 5.5 4.6 5.38 4.41 WorldTree Initially Valid Explanations Finally Valid Explanations Number of Iterations (c) Figure 4: The initial and final number of logically valid explanations, along with the average iteration times required to refine an explanation for each LLM 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (a) 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (b) 0 2 4 6 8 10 Iteration Times 0 20 40 60 80 100 Number of Refined Explanations WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (c) Figure 5: Number of successfully refined explanations at each iteration step. 4.3 Models We conducted experiments using five LLMs within the proposed framework. The models include two open-sourced models: Llama2-70b (Touvron et al., 2023) and Mixtral-8x7b (Jiang et al., 2024a), as well as Mistral-Small (mistral-small-latest) (Mistral AI, 2024), GPT-Turbo-3.5 (gpt-3.5-turbo) (Brown et al., 2020), and GPT-4 (gpt-4-0613) (OpenAI, 2023). Temperature settings were adjusted to 0 for GPT-Turbo-3.5 and GPT-4, and to 0.01 for Llama2-70b, Mixtral-8x7b, and Mistral-Small, aiming to achieve both determinism in the output and effective code generation for theorem prover. 4.4 Results Providing feedback on the exact failure step from an external theorem prover effectively guides LLMs in continuously verifying and refining explanations in NLI tasks. To assess the effectiveness of employing an external theorem prover to verify and refine explanations in NLI tasks, we conducted a comparative analysis across various LLMs (Figure 4). The initially valid explanations represent the percentage of explanations that can be verified as logically valid without any further iteration. Although the initial verification results varied among different models, all LLMs demonstrated a consistent improvement in refining the logical validity of the explanations. This process highlights the positive impact of the external feedback but also shows significant differences between models. We found that lower rates of initial valid explanations often resulted from syntactic errors, which impeded the theorem prover\u2019s ability to generate proofs. Despite this initial variability, all models demonstrate a consistent improvement in the refinement process across the datasets. Notably, GPT-4 outperformed other models, improving the validity of explanations by 48%, 43%, and 35% across the three datasets, respectively, within a maximum number of ten iterations (Figure 4). Figure 5 shows the number of explanations refined at each iteration across the e-SNLI, QASC, and WorldTree datasets. On average, we found that an increasing number of iterations leads to increasing refinement, with models requiring an average of five iterations across the datasets. 
Explanation length/complexity impacts formalisation. The e-SNLI dataset, which includes only a single explanatory sentence per example, shows the best overall performance. In contrast, the multiple-choice question answering datasets, QASC and WorldTree, exhibit comparatively lower Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 75.18 58.55 47.0 33.27 7.82 64.55 31.82 23.0 17.64 2.45 e-SNLI (a) Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 50.18 54.45 50.36 46.1 20.27 25.64 41.45 38.27 22.18 7.64 QASC (b) Llama2-70b Mixtral-8x7b Mistral-Small GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Avg. Number of Theories Contain Syntax Errors 51.27 68.45 63.63 61.73 22.91 41.27 53.72 54.27 35.64 10.27 WorldTree (c) Figure 6: The average number of theories containing syntactic errors before and after the syntax refinement process Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 7 32 36 55 84 65 67 69 74 84 e-SNLI TI+AF(Base model) TI+AF(GPT-4) (a) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 25 12 15 44 55 42 44 45 48 55 QASC TI+AF(Base model) TI+AF(GPT-4) (b) Llama2-70b Mixtral-8x7b Mistral-SmallGPT-3.5-T urbo GPT-4 0 20 40 60 80 100 Number of Refined Explanations 3 6 5 29 37 26 28 31 34 37 WorldTree TI+AF(Base model) TI+AF(GPT-4) (c) Figure 7: AF represents the autoformalisation components, and TI represents the textual inference components. TI+AF (Base Model) indicates the use of the base model for both the autoformalisation and textual inference components. TI+AF (GPT-4) indicates the use of GPT-4 for the autoformalisation components, while the base model is used for textual inference. performance. QASC typically contains 2 explanatory sentences, while WorldTree ranges from 1 to 16 sentences. As the number of explanatory sentences increases, so does the complexity of the logical reasoning required. The WorldTree dataset, in particular, poses the greatest challenge due to its demand for multi-hop inference strategies. Models show lower refining performance in WorldTree when compared to e-SNLI and QASC, with only 3%, 5%, and 6% of Llama-70b, Mixtral-8x7b, and Mistral-Small explanations being refined in WorldTree. Meanwhile, 29% and 37% of explanations are refined by GPT-3.5-Turbo and GPT-4 in WorldTree, respectively. This process involves synthesising multiple explanatory sentences to fulfill sub-goals, which must then be integrated to meet the overall hypothesis goal. Iterative and categorical refinement can monotonically reduce syntax errors in responses generated by LLMs. To evaluate the syntax error refinement stage, we quantified the presence of syntax errors in the Isabelle theories both before and after the iterative refinement process. After a maximum of three iterations, all models showed significant reductions, with maximum reductions of 68.67%, 62.31%, and 55.17% from 7.82 to 2.45, 20.27 to 7.64, and 22.91 to 10.27 across the three respective datasets (see Figure 6). While models like Llama2-70b and Mixtral-8x7b still exhibit some syntax errors in the refined theories\u2019 code, this is primarily due to their inability to perform complex autoformalisation, especially for multiple and more complex explanatory sentences such as those in the WorldTree dataset. 
This result is consistent with the percentage of explanations that were successfully refined across the models, which suggests that the autoformalisation process plays a critical role in the models\u2019 logical reasoning capability. 4.5 Ablation Study We conducted an ablation study to further evaluate and disentangle the impact of autoformalisation on performance. To this end, we adopted GPT-4 exclusively for the autoformalisation component, while retaining the original models for explanation refinement and proof strategy generation. As shown in Figure 7, integrating GPT-4 for autoformalisation led to a significant increase in the number of explanations successfully refined across all models. For instance, Llama2-70b with GPT-4 as the formalisation component refined explanations from 7% to 65% in the e-SNLI dataset. For the multiplechoice question answering dataset, GPT-3.5-Turbo showed a relatively smaller increase from 44% to 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (a) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (b) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Refined WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (c) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined e-SNLI mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (d) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined QASC mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (e) 0 2 4 6 8 10 12 14 T otal Suggested Proof Steps 0 1 2 3 4 5 6 7 8 Avg. Processed Proof Steps Unrefined WorldTree mixtral-8x7b mistral-small gpt-3.5-turbo gpt-4 llama2_70b (f) Figure 8: Average of proof steps processed by the proof assistant against the total proof steps suggested by the LLMs in refined and unrefined explanations. 48% and from 29% to 34%. Despite these improvements, a performance gap persists between GPT-4 and the other models, which is attributed to GPT-4\u2019s superior symbolic reasoning capabilities required for explanation refinement from the identified logical errors. Explanations are progressively made more complete and consistent through iterative symbolic refinement. In order to deliver step-wise logical consistency, explanations need to be made complete and self-contained, leading to the introduction of additional explanatory sentences, leading to an increase in the total number of suggested proof steps. Therefore, we further evaluated how the proof steps vary when the total number of suggested proof steps increases contrasting both refined and unrefined cases. Figure 8 illustrates this trend. In general, all models show a positive trend, as the total suggested proof steps increase, the average number of proof steps processed by the proof assistant also increases. Models like Mistral-Small and GPT-3.5-Turbo tend to suggest more proof steps to accomplish the logical goal, which can result in some redundant steps, such as the significant pulse shown in Figure 8c. 
For unrefined explanations, as shown in Figure 8d, 8e and 8f, the progression is steadier but retains a positive trend, where the models generally suggest more proof steps in response to the additional explanatory sentences introduced to correct a logical error identified from the erroneous step. We also conducted experiments on the relationship of average number of successfully processed explanatory sentences in one proof against total planned explanatory sentences in a suggest proof in appendix A.3. Examples of refined and unrefined explanations can be found in Appendix A.5. 4.6 Factual Errors and Trivial Explanations In addition to evaluating the logical validity of explanations, we also conducted a human evaluation of the refined explanations, considering factual correctness and explanation triviality, for the two bestperforming models (GPT-3.5-Turbo and GPT-4). This evaluation focused on two questions: \u201cAre the refined explanatory sentences factually correct?\u201d and \u201cIs the explanation trivial, merely repeating or paraphrasing the content of the premise and hypothesis to achieve logical validity?\u201d. As illustrated in Figure 9, our findings indicate that all refined explanations in the e-SNLI and WorldTree datasets are consistent with commonsense knowledge. In the QASC dataset, 2.27% and 1.82% of the explanation refined by GPT-3.5-Turbo and GPT4 contain sentences misaligned with true world GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 ns 1 Percentage of Explanations 100 100 100 100 e-SNLI Factually Correct Not Trivial (a) GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 ns 1 Percentage of Explanations 97.73 98.18 95.45 98.18 QASC Factually Co 98.18 Factually Correct Not Trivial (b) GPT-3.5-T urbo GPT-4 0 20 40 60 80 100 ns 1 Percentage of Explanations 100.0 100.0 WorldTree 86.21 100.0 97.3 WorldTree Factually Co 97.3 Factually Correct Not Trivial (c) Figure 9: Human evaluation of refined explanations in terms of factuality and triviality. Factually Correct indicates the percentage of explanation sentences that are correct in terms of commonsense knowledge. Not Trivial states the percentage of explanations that contain only explanatory sentences that repeat or paraphrase the premise and/or the hypothesis. knowledge. We found that the majority of these errors result from over-generalisation, such as the sentence All tetrapods are defined to have four limbs, which inaccurately includes snakes. Finally, we found a relatively low number of explanations that repeat or paraphrase the content of premise and hypothesis. This phenomenon is absent in e-SNLI and becomes more evident when the explanatory sentences increase in complexity (i.e., WorldTree), leading models sometimes to generate explanations that do not include any additional information for the entailment to hold. This work was partially funded by the Swiss National Science Foundation (SNSF) project NeuMath (200021_204617), by the EPSRC grant EP/T026995/1, \u201cEnnCore: End-to-End Conceptual Guarding of Neural Architectures\u201d under Security for all in an AI enabled society, by the CRUK National Biomarker Centre, and supported by the Manchester Experimental Cancer Medicine Centre and the NIHR Manchester Biomedical Research Centre." 
+ }, + { + "url": "http://arxiv.org/abs/2402.10767v1", + "title": "Inference to the Best Explanation in Large Language Models", + "abstract": "While Large Language Models (LLMs) have found success in real-world\napplications, their underlying explanatory process is still poorly understood.\nThis paper proposes IBE-Eval, a framework inspired by philosophical accounts on\nInference to the Best Explanation (IBE) to advance the interpretation and\nevaluation of LLMs' explanations. IBE-Eval estimates the plausibility of\nnatural language explanations through a combination of explicit logical and\nlinguistic features including: consistency, parsimony, coherence, and\nuncertainty. Extensive experiments are conducted on Causal Question Answering\n(CQA), where \\textit{IBE-Eval} is tasked to select the most plausible causal\nexplanation amongst competing ones generated by LLMs (i.e., GPT 3.5 and Llama\n2). The experiments reveal that IBE-Eval can successfully identify the best\nexplanation with up to 77\\% accuracy ($\\approx 27\\%$ above random), improving\nupon a GPT 3.5-as-a-Judge baseline ($\\approx+17\\%$) while being intrinsically\nmore efficient and interpretable. Additional analyses suggest that, despite\nmodel-specific variances, LLM-generated explanations tend to conform to IBE\ncriteria and that IBE-Eval is significantly correlated with human judgment,\nopening up opportunities for future development of automated explanation\nverification tools.", + "authors": "Dhairya Dalal, Marco Valentino, Andr\u00e9 Freitas, Paul Buitelaar", + "published": "2024-02-16", + "updated": "2024-02-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2.7" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.01904v2", + "title": "REFINER: Reasoning Feedback on Intermediate Representations", + "abstract": "Language models (LMs) have recently shown remarkable performance on reasoning\ntasks by explicitly generating intermediate inferences, e.g., chain-of-thought\nprompting. However, these intermediate inference steps may be inappropriate\ndeductions from the initial context and lead to incorrect final predictions.\nHere we introduce REFINER, a framework for finetuning LMs to explicitly\ngenerate intermediate reasoning steps while interacting with a critic model\nthat provides automated feedback on the reasoning. Specifically, the critic\nprovides structured feedback that the reasoning LM uses to iteratively improve\nits intermediate arguments. Empirical evaluations of REFINER on three diverse\nreasoning tasks show significant improvements over baseline LMs of comparable\nscale. 
Furthermore, when using GPT-3.5 or ChatGPT as the reasoner, the trained\ncritic significantly improves reasoning without finetuning the reasoner.\nFinally, our critic model is trained without expensive human-in-the-loop data\nbut can be substituted with humans at inference time.", + "authors": "Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings", + "published": "2023-04-04", + "updated": "2024-02-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.15164v2", + "title": "LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers", + "abstract": "Logical reasoning, i.e., deductively inferring the truth value of a\nconclusion from a set of premises, is an important task for artificial\nintelligence with wide potential impacts on science, mathematics, and society.\nWhile many prompting-based strategies have been proposed to enable Large\nLanguage Models (LLMs) to do such reasoning more effectively, they still appear\nunsatisfactory, often failing in subtle and unpredictable ways. In this work,\nwe investigate the validity of instead reformulating such tasks as modular\nneurosymbolic programming, which we call LINC: Logical Inference via\nNeurosymbolic Computation. In LINC, the LLM acts as a semantic parser,\ntranslating premises and conclusions from natural language to expressions in\nfirst-order logic. These expressions are then offloaded to an external theorem\nprover, which symbolically performs deductive inference. Leveraging this\napproach, we observe significant performance gains on FOLIO and a balanced\nsubset of ProofWriter for three different models in nearly all experimental\nconditions we evaluate. On ProofWriter, augmenting the comparatively small\nopen-source StarCoder+ (15.5B parameters) with LINC even outperforms GPT-3.5\nand GPT-4 with Chain-of-Thought (CoT) prompting by an absolute 38% and 10%,\nrespectively. When used with GPT-4, LINC scores 26% higher than CoT on\nProofWriter while performing comparatively on FOLIO. Further analysis reveals\nthat although both methods on average succeed roughly equally often on this\ndataset, they exhibit distinct and complementary failure modes. We thus provide\npromising evidence for how logical reasoning over natural language can be\ntackled through jointly leveraging LLMs alongside symbolic provers. All\ncorresponding code is publicly available at https://github.com/benlipkin/linc", + "authors": "Theo X. Olausson, Alex Gu, Benjamin Lipkin, Cedegao E. Zhang, Armando Solar-Lezama, Joshua B. Tenenbaum, Roger Levy", + "published": "2023-10-23", + "updated": "2024-02-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.08844v2", + "title": "RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs", + "abstract": "Despite their unprecedented success, even the largest language models make\nmistakes. Similar to how humans learn and improve using feedback, previous work\nproposed providing language models with natural language feedback to guide them\nin repairing their outputs. Because human-generated critiques are expensive to\nobtain, researchers have devised learned critique generators in lieu of human\ncritics while assuming one can train downstream models to utilize generated\nfeedback. 
However, this approach does not apply to black-box or limited access\nmodels such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of\nlarge general-purpose language agents, fine-tuning is neither computationally\nnor spatially efficient as it results in multiple copies of the network. In\nthis work, we introduce RL4F (Reinforcement Learning for Feedback), a\nmulti-agent collaborative framework where the critique generator is trained to\nmaximize end-task performance of GPT-3, a fixed model more than 200 times its\nsize. RL4F produces critiques that help GPT-3 revise its outputs. We study\nthree datasets for action planning, summarization and alphabetization and show\nrelative improvements up to 10% in multiple text similarity metrics over other\nlearned, retrieval-augmented or prompting-based critique generators.", + "authors": "Afra Feyza Aky\u00fcrek, Ekin Aky\u00fcrek, Aman Madaan, Ashwin Kalyan, Peter Clark, Derry Wijaya, Niket Tandon", + "published": "2023-05-15", + "updated": "2023-07-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.12295v2", + "title": "Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning", + "abstract": "Large Language Models (LLMs) have shown human-like reasoning abilities but\nstill struggle with complex logical problems. This paper introduces a novel\nframework, Logic-LM, which integrates LLMs with symbolic solvers to improve\nlogical problem-solving. Our method first utilizes LLMs to translate a natural\nlanguage problem into a symbolic formulation. Afterward, a deterministic\nsymbolic solver performs inference on the formulated problem. We also introduce\na self-refinement module, which utilizes the symbolic solver's error messages\nto revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on\nfive logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO,\nLogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant\nperformance boost of 39.2% over using LLM alone with standard prompting and\n18.4% over LLM with chain-of-thought prompting. Our findings suggest that\nLogic-LM, by combining LLMs with symbolic logic, offers a promising avenue for\nfaithful logical reasoning. Code and data are publicly available at\nhttps://github.com/teacherpeterpan/Logic-LLM.", + "authors": "Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang", + "published": "2023-05-20", + "updated": "2023-10-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.12813v3", + "title": "Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback", + "abstract": "Large language models (LLMs), such as ChatGPT, are able to generate\nhuman-like, fluent responses for many downstream tasks, e.g., task-oriented\ndialog and question answering. However, applying LLMs to real-world,\nmission-critical applications remains challenging mainly due to their tendency\nto generate hallucinations and their inability to use external knowledge. This\npaper proposes a LLM-Augmenter system, which augments a black-box LLM with a\nset of plug-and-play modules. Our system makes the LLM generate responses\ngrounded in external knowledge, e.g., stored in task-specific databases. 
It\nalso iteratively revises LLM prompts to improve model responses using feedback\ngenerated by utility functions, e.g., the factuality score of a LLM-generated\nresponse. The effectiveness of LLM-Augmenter is empirically validated on two\ntypes of scenarios, task-oriented dialog and open-domain question answering.\nLLM-Augmenter significantly reduces ChatGPT's hallucinations without\nsacrificing the fluency and informativeness of its responses. We make the\nsource code and models publicly available.", + "authors": "Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, Jianfeng Gao", + "published": "2023-02-24", + "updated": "2023-03-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.08844v2", + "title": "RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs", + "abstract": "Despite their unprecedented success, even the largest language models make\nmistakes. Similar to how humans learn and improve using feedback, previous work\nproposed providing language models with natural language feedback to guide them\nin repairing their outputs. Because human-generated critiques are expensive to\nobtain, researchers have devised learned critique generators in lieu of human\ncritics while assuming one can train downstream models to utilize generated\nfeedback. However, this approach does not apply to black-box or limited access\nmodels such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of\nlarge general-purpose language agents, fine-tuning is neither computationally\nnor spatially efficient as it results in multiple copies of the network. In\nthis work, we introduce RL4F (Reinforcement Learning for Feedback), a\nmulti-agent collaborative framework where the critique generator is trained to\nmaximize end-task performance of GPT-3, a fixed model more than 200 times its\nsize. RL4F produces critiques that help GPT-3 revise its outputs. We study\nthree datasets for action planning, summarization and alphabetization and show\nrelative improvements up to 10% in multiple text similarity metrics over other\nlearned, retrieval-augmented or prompting-based critique generators.", + "authors": "Afra Feyza Aky\u00fcrek, Ekin Aky\u00fcrek, Aman Madaan, Ashwin Kalyan, Peter Clark, Derry Wijaya, Niket Tandon", + "published": "2023-05-15", + "updated": "2023-07-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.05128v2", + "title": "Teaching Large Language Models to Self-Debug", + "abstract": "Large language models (LLMs) have achieved impressive performance on code\ngeneration. However, for complex programming tasks, generating the correct\nsolution in one go becomes challenging, thus some prior works have designed\nprogram repair approaches to improve code generation performance. In this work,\nwe propose Self-Debugging, which teaches a large language model to debug its\npredicted program via few-shot demonstrations. 
In particular, we demonstrate\nthat Self-Debugging can teach the large language model to perform rubber duck\ndebugging; i.e., without any human feedback on the code correctness or error\nmessages, the model is able to identify its mistakes by investigating the\nexecution results and explaining the generated code in natural language.\nSelf-Debugging achieves the state-of-the-art performance on several code\ngeneration benchmarks, including the Spider dataset for text-to-SQL generation,\nTransCoder for C++-to-Python translation, and MBPP for text-to-Python\ngeneration. On the Spider benchmark where there are no unit tests to verify the\ncorrectness of predictions, Self-Debugging with code explanation consistently\nimproves the baseline by 2-3%, and improves the prediction accuracy on problems\nof the hardest level by 9%. On TransCoder and MBPP where unit tests are\navailable, Self-Debugging improves the baseline accuracy by up to 12%.\nMeanwhile, by leveraging feedback messages and reusing failed predictions,\nSelf-Debugging notably improves sample efficiency, and can match or outperform\nbaseline models that generate more than 10x candidate programs.", + "authors": "Xinyun Chen, Maxwell Lin, Nathanael Sch\u00e4rli, Denny Zhou", + "published": "2023-04-11", + "updated": "2023-10-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14002v1", + "title": "Improving Language Models via Plug-and-Play Retrieval Feedback", + "abstract": "Large language models (LLMs) exhibit remarkable performance across various\nNLP tasks. However, they often generate incorrect or hallucinated information,\nwhich hinders their practical applicability in real-world scenarios. Human\nfeedback has been shown to effectively enhance the factuality and quality of\ngenerated content, addressing some of these limitations. However, this approach\nis resource-intensive, involving manual input and supervision, which can be\ntime-consuming and expensive. Moreover, it cannot be provided during inference,\nfurther limiting its practical utility in dynamic and interactive applications.\nIn this paper, we introduce ReFeed, a novel pipeline designed to enhance LLMs\nby providing automatic retrieval feedback in a plug-and-play framework without\nthe need for expensive fine-tuning. ReFeed first generates initial outputs,\nthen utilizes a retrieval model to acquire relevant information from large\ndocument collections, and finally incorporates the retrieved information into\nthe in-context demonstration for output refinement, thereby addressing the\nlimitations of LLMs in a more efficient and cost-effective manner. Experiments\non four knowledge-intensive benchmark datasets demonstrate our proposed ReFeed\ncould improve over +6.0% under zero-shot setting and +2.5% under few-shot\nsetting, compared to baselines without using retrieval feedback.", + "authors": "Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.00745v1", + "title": "Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement", + "abstract": "An increasing amount of research in Natural Language Inference (NLI) focuses\non the application and evaluation of Large Language Models (LLMs) and their\nreasoning capabilities. 
Despite their success, however, LLMs are still prone to\nfactual errors and inconsistencies in their explanations, offering limited\ncontrol and interpretability for inference in complex domains. In this paper,\nwe focus on ethical NLI, investigating how hybrid neuro-symbolic techniques can\nenhance the logical validity and alignment of ethical explanations produced by\nLLMs. Specifically, we present an abductive-deductive framework named\nLogic-Explainer, which integrates LLMs with an external backward-chaining\nsolver to refine step-wise natural language explanations and jointly verify\ntheir correctness, reduce incompleteness and minimise redundancy. An extensive\nempirical analysis demonstrates that Logic-Explainer can improve explanations\ngenerated via in-context learning methods and Chain-of-Thought (CoT) on\nchallenging ethical NLI tasks, while, at the same time, producing formal proofs\ndescribing and supporting models' reasoning. As ethical NLI requires\ncommonsense reasoning to identify underlying moral violations, our results\nsuggest the effectiveness of neuro-symbolic methods for multi-step NLI more\nbroadly, opening new opportunities to enhance the logical consistency,\nreliability, and alignment of LLMs.", + "authors": "Xin Quan, Marco Valentino, Louise A. Dennis, Andr\u00e9 Freitas", + "published": "2024-02-01", + "updated": "2024-02-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.04910v2", + "title": "Baldur: Whole-Proof Generation and Repair with Large Language Models", + "abstract": "Formally verifying software properties is a highly desirable but\nlabor-intensive task. Recent work has developed methods to automate formal\nverification using proof assistants, such as Coq and Isabelle/HOL, e.g., by\ntraining a model to predict one proof step at a time, and using that model to\nsearch through the space of possible proofs. This paper introduces a new method\nto automate formal verification: We use large language models, trained on\nnatural language text and code and fine-tuned on proofs, to generate whole\nproofs for theorems at once, rather than one step at a time. We combine this\nproof generation model with a fine-tuned repair model to repair generated\nproofs, further increasing proving power. As its main contributions, this paper\ndemonstrates for the first time that: (1) Whole-proof generation using\ntransformers is possible and is as effective as search-based techniques without\nrequiring costly search. (2) Giving the learned model additional context, such\nas a prior failed proof attempt and the ensuing error message, results in proof\nrepair and further improves automated proof generation. (3) We establish a new\nstate of the art for fully automated proof synthesis. We reify our method in a\nprototype, Baldur, and evaluate it on a benchmark of 6,336 Isabelle/HOL\ntheorems and their proofs. In addition to empirically showing the effectiveness\nof whole-proof generation, repair, and added context, we show that Baldur\nimproves on the state-of-the-art tool, Thor, by automatically generating proofs\nfor an additional 8.7% of the theorems. Together, Baldur and Thor can prove\n65.7% of the theorems fully automatically. This paper paves the way for new\nresearch into using large language models for automating formal verification.", + "authors": "Emily First, Markus N. 
Rabe, Talia Ringer, Yuriy Brun", + "published": "2023-03-08", + "updated": "2023-03-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.LO", + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.12615v1", + "title": "Autoformalization with Large Language Models", + "abstract": "Autoformalization is the process of automatically translating from natural\nlanguage mathematics to formal specifications and proofs. A successful\nautoformalization system could advance the fields of formal verification,\nprogram synthesis, and artificial intelligence. While the long-term goal of\nautoformalization seemed elusive for a long time, we show large language models\nprovide new prospects towards this goal. We make the surprising observation\nthat LLMs can correctly translate a significant portion ($25.3\\%$) of\nmathematical competition problems perfectly to formal specifications in\nIsabelle/HOL. We demonstrate the usefulness of this process by improving a\npreviously introduced neural theorem prover via training on these\nautoformalized theorems. Our methodology results in a new state-of-the-art\nresult on the MiniF2F theorem proving benchmark, improving the proof rate from\n$29.6\\%$ to $35.2\\%$.", + "authors": "Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy", + "published": "2022-05-25", + "updated": "2022-05-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.LO", + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.13312v1", + "title": "LeanReasoner: Boosting Complex Logical Reasoning with Lean", + "abstract": "Large language models (LLMs) often struggle with complex logical reasoning\ndue to logical inconsistencies and the inherent difficulty of such reasoning.\nWe use Lean, a theorem proving framework, to address these challenges. By\nformalizing logical reasoning problems into theorems within Lean, we can solve\nthem by proving or disproving the corresponding theorems. This method reduces\nthe risk of logical inconsistencies with the help of Lean's symbolic solver. It\nalso enhances our ability to treat complex reasoning tasks by using Lean's\nextensive library of theorem proofs. Our method achieves state-of-the-art\nperformance on the FOLIO dataset and achieves performance near this level on\nProofWriter. Notably, these results were accomplished by fine-tuning on fewer\nthan 100 in-domain samples for each dataset.", + "authors": "Dongwei Jiang, Marcio Fonseca, Shay B. Cohen", + "published": "2024-03-20", + "updated": "2024-03-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14623v2", + "title": "Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models", + "abstract": "Fact-checking is an essential task in NLP that is commonly utilized for\nvalidating the factual accuracy of claims. Prior work has mainly focused on\nfine-tuning pre-trained languages models on specific datasets, which can be\ncomputationally intensive and time-consuming. With the rapid development of\nlarge language models (LLMs), such as ChatGPT and GPT-3, researchers are now\nexploring their in-context learning capabilities for a wide range of tasks. 
In\nthis paper, we aim to assess the capacity of LLMs for fact-checking by\nintroducing Self-Checker, a framework comprising a set of plug-and-play modules\nthat facilitate fact-checking by purely prompting LLMs in an almost zero-shot\nsetting. This framework provides a fast and efficient way to construct\nfact-checking systems in low-resource environments. Empirical results\ndemonstrate the potential of Self-Checker in utilizing LLMs for fact-checking.\nHowever, there is still significant room for improvement compared to SOTA\nfine-tuned models, which suggests that LLM adoption could be a promising\napproach for future fact-checking research.", + "authors": "Miaoran Li, Baolin Peng, Michel Galley, Jianfeng Gao, Zhu Zhang", + "published": "2023-05-24", + "updated": "2024-04-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.12426v1", + "title": "MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models", + "abstract": "Language Models (LMs) have shown impressive performance in various natural\nlanguage tasks. However, when it comes to natural language reasoning, LMs still\nface challenges such as hallucination, generating incorrect intermediate\nreasoning steps, and making mathematical errors. Recent research has focused on\nenhancing LMs through self-improvement using feedback. Nevertheless, existing\napproaches relying on a single generic feedback source fail to address the\ndiverse error types found in LM-generated reasoning chains. 
In this work, we\npropose Multi-Aspect Feedback, an iterative refinement framework that\nintegrates multiple feedback modules, including frozen LMs and external tools,\neach focusing on a specific error category. Our experimental results\ndemonstrate the efficacy of our approach to addressing several errors in the\nLM-generated reasoning chain and thus improving the overall performance of an\nLM in several reasoning tasks. We see a relative improvement of up to 20% in\nMathematical Reasoning and up to 18% in Logical Entailment.", + "authors": "Deepak Nathani, David Wang, Liangming Pan, William Yang Wang", + "published": "2023-10-19", + "updated": "2023-10-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.07662v4", + "title": "NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning", + "abstract": "Our goal is a modern approach to answering questions via systematic reasoning\nwhere answers are supported by human interpretable proof trees grounded in an\nNL corpus of authoritative facts. Such a system would help alleviate the\nchallenges of interpretability and hallucination with modern LMs, and the lack\nof grounding of current explanation methods (e.g., Chain-of-Thought). This\npaper proposes a new take on Prolog-based inference engines, where we replace\nhandcrafted rules with a combination of neural language modeling, guided\ngeneration, and semiparametric dense retrieval. Our implementation, NELLIE, is\nthe first system to demonstrate fully interpretable, end-to-end grounded QA as\nentailment tree proof search, going beyond earlier work explaining\nknown-to-be-true facts from text. In experiments, NELLIE outperforms a\nsimilar-sized state-of-the-art reasoner [Tafjord et al., 2022] while producing\nknowledge-grounded explanations. We also find NELLIE can exploit both\nsemi-structured and NL text corpora to guide reasoning. Together these suggest\na new way to jointly reap the benefits of both modern neural methods and\ntraditional symbolic reasoning.", + "authors": "Nathaniel Weir, Peter Clark, Benjamin Van Durme", + "published": "2022-09-16", + "updated": "2023-12-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. 
We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models (LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, i.e., benchmark leakage, which refers to the situation where data related to\nevaluation sets is occasionally used for model training. This phenomenon has\nbecome more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread use of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborately designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. 
To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunsatisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research on harmless LLMs.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of their capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. 
Due to the high degree of\nrandomness in the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. 
Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? 
Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. 
We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. 
The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. 
Our analysis provides a comprehensive picture of the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. 
To make thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. 
Clearly, our MAD framework encourages divergent thinking in LLMs,\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperimental results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. 
We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. 
Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. 
Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. 
This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. 
Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. 
Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. 
Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLM\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method\nGF-Think to mitigate biases of LLMs from a group fairness perspective.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. 
While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. 
First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. 
Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stake applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. 
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). 
We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. 
ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. Many studies have employed\nproprietary close-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. 
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. 
Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. 
Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. 
While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. 
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. 
We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. 
The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. 
The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. 
Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. 
The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent sota and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. 
Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. 
Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. 
Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. 
Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + } +] \ No newline at end of file