diff --git "a/related_53K/test_related_long_2404.18130v1.json" "b/related_53K/test_related_long_2404.18130v1.json"
new file mode 100644
--- /dev/null
+++ "b/related_53K/test_related_long_2404.18130v1.json"
@@ -0,0 +1,8607 @@
+[ + { + "url": "http://arxiv.org/abs/2404.18130v1", + "title": "Logic Agent: Enhancing Validity with Logic Rule Invocation", + "abstract": "Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for\naugmenting the inferential capabilities of language models during reasoning\ntasks. Despite its advancements, CoT often grapples with challenges in\nvalidating reasoning validity and ensuring informativeness. Addressing these\nlimitations, this paper introduces the Logic Agent (LA), an agent-based\nframework aimed at enhancing the validity of reasoning processes in Large\nLanguage Models (LLMs) through strategic logic rule invocation. Unlike\nconventional approaches, LA transforms LLMs into logic agents that dynamically\napply propositional logic rules, initiating the reasoning process by converting\nnatural language inputs into structured logic forms. The logic agent leverages\na comprehensive set of predefined functions to systematically navigate the\nreasoning process. This methodology not only promotes the structured and\ncoherent generation of reasoning constructs but also significantly improves\ntheir interpretability and logical coherence. Through extensive\nexperimentation, we demonstrate LA's capacity to scale effectively across\nvarious model sizes, markedly improving the precision of complex reasoning\nacross diverse tasks.", "authors": "Hanmeng Liu, Zhiyang Teng, Chaoli Zhang, Yue Zhang", "published": "2024-04-28", "updated": "2024-04-28", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CL" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Traditional pre-trained models have primarily tackled logical reasoning through statistical training, a connectionist approach that often misinterprets the complexity of language. Similarly, formal symbolic systems, while precise, struggle with the adaptability needed for diverse linguistic phenomena. This backdrop sets the stage for the introduction of new approaches to complex reasoning in Large Language Models (LLMs). Reasoning Paradigms in Large Language Model Prompting: The development of few-shot (Wei et al., 2022) and zero-shot (Kojima et al., 2022) Chain-of-Thought prompting has been instrumental in enabling LLMs to tackle complex reasoning tasks. Subsequent developments have introduced varied data structures, such as Tree-of-Thought (Yao et al., 2023), Graph-of-Thought (Besta et al., 2024), and Program-of-Thought (Chen et al., 2022), enhancing LLMs' capabilities to reflect on and evaluate their reasoning processes. Moving beyond basic prompting strategies, the ReAct model (Yao et al., 2022) intertwines reasoning with actionable tasks like search, while the Selection-Inference framework (Creswell et al., 2023) employs a two-step process of context formation and logical chaining. Although these approaches parallel ours in process structure, they do not incorporate explicit logical rules, and the chaining mechanism is entirely model-dependent. The use of external tools within prompting paradigms, particularly for tasks necessitating additional knowledge, represents another significant advancement. In mathematical reasoning, tools such as calculators have proven invaluable.
Analogously, in our methodology, predefined functions for applying inference rules are akin to external tools, a concept previously unexplored in this context. Another paradigm shift in LLM prompting is the division of complex tasks into subproblems or the collaborative engagement of diverse models. Cumulative reasoning (Zhang et al., 2023) adopts a streamlined, iterative approach utilizing distinct LLMs as AI agents; ScratchPad (Nye et al., 2021) contributes to multi-step reasoning by revealing intermediate steps; Meta-prompting (Suzgun and Kalai, 2024) envisions LLMs as orchestrators in a collaborative environment, responsible for decomposing complex tasks, delegating sub-tasks to specialized models, facilitating inter-model communication, and applying critical analysis throughout.

[Table: logic rules used by LA — Contrapositive Law: P→Q ↔ ¬Q→¬P; Transitive Law: P→Q, Q→R ⊢ P→R; De Morgan's Laws: ¬(P∨Q) ↔ ¬P∧¬Q, ¬(P∧Q) ↔ ¬P∨¬Q; Contradictory Relationships: A↔¬O, O↔¬A, E↔¬I, I↔¬E; Contrary Relationships (Upper Contrary): A→¬E, E→¬A; Subcontrary Relationships (Lower Contrary): ¬I→O, ¬O→I; Subalternation: A→I, E→O, ¬I→¬A, ¬O→¬E.]
[Figure 2: The architecture of the LA framework, with panels for Parsed Logic, Logic rules, and Guided Generation over the magma-ocean example (e.g., the premise Implies(Atom(Moon's surface was once a magma ocean), Atom(the distribution of many elements on it should be continuous)) and the contrapositive check that shows Option A cannot be inferred). Highlighted texts are the output of pre-defined functions.]

Our approach similarly harnesses the LLMs' decision-making capability in selecting appropriate inference rules, aligning with this broader trend of utilizing LLMs for complex, collaborative reasoning processes. Unlike previous attempts, we leverage the computational power and contextual understanding of LLMs to act as agents that dynamically invoke logic rules. This integration enables the LLMs to not only process language with their inherent sophistication but also apply logical reasoning in a structured and accurate manner, akin to utilizing a calculator for mathematical enhancements. Beyond prompting strategies, recent studies have explored instruction-tuning Large Language Models (LLMs) with specific datasets to enhance their abstract reasoning capabilities.
LogiCoT (Liu et al., 2023) fine-tunes an LLAMA-7B model using logical chaining data, demonstrating significant improvements across various logical reasoning tasks; LogicLLM (Jiao et al., 2023) employs a self-supervised post-training approach tailored for logical reasoning enhancements; Symbol-LLM (Xu et al., 2023) leverages symbolic data within a two-stage tuning framework to imbue a LLAMA2-CHAT model with symbolic knowledge. While these approaches underscore the potential of fine-tuning strategies in augmenting LLMs, our work distinguishes itself as the first to specifically address and enhance logical reasoning capabilities at the decoding stage, employing a multi-agent strategy to elevate the process. Formal Reasoning: Formal reasoning systems have primarily been developed to address mathematical challenges. Peano (Poesia and Goodman, 2023), designed to solve educational mathematical problems, employs dependent types to encode mathematical definitions and proofs, echoing the structured approach in our work. Yet, our focus diverges towards logical reasoning scenarios, an area where systems like Peano have traditionally been less potent. Addressing formal logical reasoning, LINC (Olausson et al., 2023) leverages LLMs as FOL language translators to attain formal representations of contextual information, complemented by traditional theorem provers for validation. LINC's approach, which employs a voting strategy to resolve inconsistencies in FOL language generation, contrasts with our method, which adopts a more flexible propositional logic to distill the abstract essence of context while meticulously controlling the validity of generative reasoning. Furthermore, the exploration of language models as theorem provers has introduced systems like LangPro (Abzianidze, 2017), a natural language theorem prover that harnesses higher-order logic to assess linguistic expressions' consistency. LangPro's reliance on CCG parsing and a dedicated knowledge base for generating Lambda Logical Forms (LLFs) presents a contrast to our work, which utilizes propositional logic, thereby circumventing the need for a theorem-proving knowledge base. In parallel, semantic-constrained decoding techniques, as exemplified by NeuroLogic Decoding (Lu et al., 2020), enable language models to generate contextually coherent text while adhering to complex lexical constraints. Our approach resonates with this paradigm, albeit with a distinct focus on employing constrained generation paired with guided deduction rules, thereby carving a unique niche in the landscape of formal reasoning and logical inference.", "pre_questions": [], "main_content": "Introduction
The quest for augmenting the reasoning capabilities of language models has been a focal point of recent advancements in the evolving landscape of artificial intelligence. Chain-of-Thought (CoT) prompting (Wei et al., 2022; Kojima et al., 2022; Wang et al., 2022b; Chu et al., 2023) marked a significant stride in this journey, revealing the potential of large language models (LLMs) to mimic human-like reasoning processes. These advancements have led to remarkable achievements, with LLMs demonstrating proficiency in a variety of competitive examinations, including those focused on mathematics (Li et al., 2022; He-Yueya et al.,
2023; Imani et al., 2023) and reading comprehension (Wang et al., 2023; Xiao et al., 2023).

[Figure 1: An example of logical reasoning problems in competitive exams — the magma-ocean context with four answer options; the parsed atoms P (Moon's surface was a magma ocean), Q (continuous distribution of elements), and R (the 'Giant Impact Hypothesis' is the most plausible explanation); the implications P→Q and P→R; and the contrapositive-law guides ¬Q→¬P and ¬R→¬P. GPT-4's free-form CoT wrongly accepts option A (¬P→¬Q) as the contrapositive of P→Q, while LA's guided derivation Contrapositive(P→Q): ¬Q→¬P marks it invalid. GPT-4 can handle abstract logical reasoning; however, it fails to conduct a valid inference chain.]

However, despite its implications in various reasoning tasks, CoT has faced limitations, particularly in validating reasoning and ensuring the informativeness of its outputs (Lanham et al., 2023). LLMs' performance in logical reasoning tasks, a critical component of examinations like the Law School Admission Test (LSAT) and Chinese civil service selection exams, remains notably inferior to that of well-trained humans (Liu et al., 2023). Figure 1 shows an example of such questions. Crafted by experts to challenge human logical reasoning abilities, they require a valid and rule-bound chain of logic that is often non-trivial to discern. Testees must engage in abstract thinking, translating contexts into logical symbols and applying strict inference rules to form logical chains. This gap highlights a critical challenge: the ability of LLMs to consistently follow rules and verify the validity of logic chains. As illustrated in Figure 1, a GPT-4 model struggles with the deduction of the contrapositive law, despite having conceptual knowledge of it. One reason may be that a statistical system such as an LLM offers no strict guarantee of producing correct complex reasoning chains across contexts. Inspired by the integration of neural networks with formal symbolic solvers (Azerbayev et al., 2023; Jiang et al., 2023; Thakur et al., 2023), tool use (Gao et al., 2023; Schick et al., 2023; Paranjape et al., 2023), and constrained decoding (Du et al., 2020; Geng et al., 2024), we address this issue by introducing Logic Agent (LA), an agent-based constrained generation framework that leverages propositional logic and inference rules as fundamental guides for constructing logically sound inference chains.
LA is designed to steer Large Language Models (LLMs) toward a trajectory of enhanced logical coherence and interpretability by introducing symbolic reasoning. In particular, we let an LLM serve as a decision-making agent and build a callable symbolic reasoning agent by assembling a set of essential formal logic rules. The LLM agent is instructed to make use of the symbolic reasoning agent so that formal reasoning steps are guaranteed to be strictly correct. We first define the essentials of compositional logic, i.e., the logic components and syntax. This serves as the initial step, converting complex natural language statements into structured compositional logic representations. Second, we define functions for applying deduction rules: given a logic expression, they form an inference chain with implicit logic. These functions are tools for LLMs to use. Lastly, we prompt LLMs to decide which rule to apply in different states. When an LLM calls a rule, the output of the corresponding function is guaranteed to yield valid logic chains on which the LLM can judge the truthfulness of the hypotheses.

In our study, we rigorously evaluated the Logic Agent (LA) framework using a mix of commercial and open-source Large Language Models, including OpenAI's GPT-4 and various Hugging Face models. Our findings, across this diverse range of models, consistently highlight LA's effectiveness in enhancing logical reasoning in complex tasks. Alongside our experimental insights, we are releasing our code to contribute to ongoing research. To the best of our knowledge, this is the first initiative to integrate propositional logic into LLMs at such a scale. Distinctively, we encapsulate the logical reasoning process into callable function forms, packaging logic rules as tools for LLM agents. This strategic shift in leveraging LLMs as autonomous decision-makers, equipped with a toolkit of generalized logic reasoning functions, marks a significant departure from existing models.

Figure 2 presents the Logic Agent (LA) framework's architecture. Initially, natural language inputs undergo logic parsing on the left, resulting in structured logic forms (see Section 3.1). The center highlights the application of deduction rules for logical inference (see Section 3.2). Finally, on the right, the constrained generation process employs these inferences to produce contextually relevant and logically coherent outputs, illustrating LA's systematic approach to enhancing reasoning in large language models (see Section 3.3). At the heart of LA lies the meticulous definition and utilization of compositional logic essentials, encompassing both the critical logic components and their associated syntax. This pivotal initial step involves transforming complex natural language statements into structured representations of compositional logic.

3.1 Logical Construct Classes
Within LA, various classes of logical constructs are parsed and utilized (a minimal sketch follows this list). These include:
Variable: Represents a variable symbol in logical expressions.
Atom: Denotes an atomic formula, the fundamental unit of logical statements.
Not: Embodies the negation operation in logic.
And: Indicates logical conjunction, combining multiple propositions.
Or: Symbolizes logical disjunction, offering alternative propositions.
Implies: Represents the implication relationship between propositions.
Equiv: Denotes logical equivalence between statements.
Exists and Forall: Represent existential and universal quantification, respectively, allowing for the expression of propositions about 'some' or 'all' entities within a domain.
Rule-based functions within LA parse these logical constructs and quantified sentences, ensuring accurate representation and manipulation of logical expressions.
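To make the construct inventory concrete, here is a minimal sketch of how these classes could be encoded; the class names mirror the list above, but the dataclass fields and the union type are our illustrative assumptions, not the paper's released implementation.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Variable:
        name: str  # a variable symbol in logical expressions

    @dataclass
    class Atom:
        text: str  # an atomic formula, the fundamental unit of a statement

    @dataclass
    class Not:
        operand: "Formula"

    @dataclass
    class And:
        operands: List["Formula"]

    @dataclass
    class Or:
        operands: List["Formula"]

    @dataclass
    class Implies:
        antecedent: "Formula"
        consequent: "Formula"

    @dataclass
    class Equiv:
        left: "Formula"
        right: "Formula"

    @dataclass
    class Forall:
        var: Variable
        body: "Formula"

    @dataclass
    class Exists:
        var: Variable
        body: "Formula"

    Formula = Union[Atom, Not, And, Or, Implies, Equiv, Forall, Exists]

    # Parsed form of the first context sentence from Figures 1 and 2:
    premise = Implies(
        Atom("Moon's surface was once a magma ocean"),
        Atom("the distribution of many elements on it should be continuous"),
    )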
3.2 Inference Rules
On this foundational layer, LA incorporates a suite of defined functions for applying various deduction rules. These functions serve as advanced tools for LLMs, facilitating the formation of inference chains that integrate both explicit and implicit logic elements. This enables LLMs to navigate the complexities of logical deduction while maintaining structured and coherent reasoning throughout. The key inference rules and their corresponding functions in LA include (see the sketch after this list):
Contrapositive: A function applying the contrapositive law, turning implications into their logically equivalent forms.
Transitive: A function for the transitive law, linking propositions through a common term.
De_Morgans: Implements De Morgan's laws, transforming conjunctions and disjunctions while preserving logical equivalence.
We also integrate the foundational principles of categorical propositions, which are essential to syllogistic logic. There are four key proposition types: SAP (A) Universal Affirmative, SIP (I) Particular Affirmative, SEP (E) Universal Negative, and SOP (O) Particular Negative. Below are the corresponding functions:
Contradictory: A function handling contradictory relationships, identifying mutually exclusive propositions.
Contrary: Manages contrary relationships, where two propositions cannot be true simultaneously but can be false together.
Subcontrary: Deals with subcontrary relationships, where two propositions cannot be false simultaneously but can be true together.
Subalternation_forward and Subalternation_backward: Functions facilitating subalternation, capturing the inferential relationships between universal and particular propositions.
Through these specialized functions, LA empowers LLMs to apply logical reasoning accurately and effectively, enhancing their capability to tackle complex reasoning tasks with a higher degree of precision and reliability.
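As an illustration of how such rule functions can be realized over the constructs sketched above, here is a minimal version of three of them. The bodies are our assumptions about one reasonable encoding, not the paper's code; the point is that each rule is ordinary deterministic code, so any conclusion it returns is valid by construction, which is what lets the agent trust the resulting chain.

    def contrapositive(f: Implies) -> Implies:
        # P -> Q  yields  not Q -> not P (logically equivalent)
        return Implies(Not(f.consequent), Not(f.antecedent))

    def transitive(f: Implies, g: Implies) -> Implies:
        # P -> Q and Q -> R  yield  P -> R
        assert f.consequent == g.antecedent, "middle terms must match"
        return Implies(f.antecedent, g.consequent)

    def de_morgans(f: Not):
        # not(P or Q) <-> (not P) and (not Q); not(P and Q) <-> (not P) or (not Q)
        inner = f.operand
        if isinstance(inner, Or):
            return And([Not(x) for x in inner.operands])
        if isinstance(inner, And):
            return Or([Not(x) for x in inner.operands])
        raise ValueError("De Morgan's laws apply to a negated conjunction or disjunction")

    # Applied to the magma-ocean premise, contrapositive(premise) yields
    # Implies(Not(Atom("the distribution ...")), Not(Atom("Moon's surface ..."))),
    # so option A in Figure 1 (not P -> not Q) cannot be derived and is marked invalid.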
3.3 Rule-Guided Generation
We prompt LLMs to discern and decide upon the most appropriate rule to apply in varying states of reasoning. This dynamic interaction empowers LLMs to judiciously invoke the corresponding functions, each meticulously crafted to guarantee the generation of valid logic chains. Consequently, LLMs are equipped with a powerful mechanism to scrutinize the veracity of hypotheses, making informed judgments based on the logically consistent chains produced. We use in-context examples to demonstrate how these functions are called in the guided generation process and leverage the capabilities of existing LLMs developed by OpenAI and Hugging Face. These models offer a robust starting point, owing to their advanced language understanding and processing abilities. However, our approach goes beyond the conventional use of LLMs by optimizing each component for its specific role in the logical reasoning process. This targeted optimization is key to transcending the current limitations of LLMs in handling the nuanced and rule-bound nature of logical reasoning tests. By integrating a structured, rule-guided reasoning methodology into the operational framework of LLMs, LA aims to improve not only the logical precision of these models but also their interpretability and coherence. The incorporation of propositional logic, deduction rules, and a strategic prompting mechanism positions LA as an innovative approach. It seeks to bridge the current divide between the computational efficiency of LLMs and the detailed, logical discernment typical of human reasoning.

Dataset | Size | Target
ReClor dev | 500 | 4-way multi-choice
AR-LSAT test | 230 | 5-way multi-choice
LogiQA22 | 1,354 | 4-way multi-choice
ConTRoL test | 805 | E, C, N
NaN-NLI test | 259 | E, C, N
RuleTaker dev | 10,068 | Yes, No
ProofWriter dev | 10,158 | Yes, No
Table 1: The statistics of the datasets. ("E" refers to "entailment"; "C" refers to "contradiction"; "N" refers to "neutral".)

3.4 Tasks
We consider various logical reasoning tasks, including Multi-Choice Reading Comprehension (MCRC), Natural Language Inference (NLI), and True-or-False questions (TF). The datasets we use are listed in Table 1. ReClor (Yu et al., 2020), AR-LSAT (Wang et al., 2022a), and LogiQA22 (Liu et al., 2023) are three renowned multi-choice reading comprehension datasets for logical reasoning. ReClor and AR-LSAT are collected from verbal reasoning questions in competitive tests such as the LSAT (Law School Admission Test) exam. LogiQA22 is collected from the 2022 Chinese Civil Service Examination. ConTRoL (Liu et al., 2021) and NaN-NLI (Truong et al., 2022) are two logical reasoning datasets for the natural language inference task, in which the goal is to decide whether a hypothesis can be logically entailed by the premises. ConTRoL features entailment relationships for long texts, and NaN-NLI focuses on negations. Both datasets are three-way classification tasks. RuleTaker (Clark et al., 2020) and ProofWriter (Tafjord et al., 2020) are two synthetic datasets widely used in formal logic reasoning. They take the form of yes-or-no questions designed to test the ability of models to understand and apply rules and facts stated in natural language. The tasks are evaluated with few-shot prompting; we use three in-context examples covering different inference rule scenarios.

For the implementation, we use a series of models from the OpenAI suite, including DAVINCI-002, GPT-3.5-TURBO, and GPT-4. DAVINCI-002 is the GPT base model currently supported by the OpenAI API. GPT-3.5-TURBO and GPT-4 are two chat models available in the OpenAI API. Furthermore, we extend our evaluation to Hugging Face models like LLAMA-2-13B (Touvron et al., 2023) and MIXTRAL-8X7B-V0.1 (Jiang et al., 2024), thereby encompassing a broad spectrum of AI models. LLAMA-2-13B is a 13B open LLM developed by Meta. MIXTRAL-8X7B-V0.1 is a Mixture-of-Experts (MoE) model developed by Mistral AI. This diverse selection includes both base and instruction-tuned models, covering a range of open-source and closed-source options, to provide a comprehensive overview of the capabilities and performance variations across different AI architectures in logical reasoning tasks. We use the guidance library¹ for implementing our rule-constrained generation framework.

4 Experiments
We employ a diverse range of datasets and models to ensure a robust and thorough assessment of our framework. We detail our experimental setup, the metrics used for evaluation, and our main findings.

4.1 Experimental Setup
Baselines: Our experimental baselines comprise two distinct approaches: direct answering and Chain-of-Thought (CoT) reasoning.
To facilitate a fair comparison between base models and instruction-tuned models, we provide three in-context examples for both the direct answering and the CoT scenarios. This approach aids LLMs in generating answers that can be directly compared with the gold labels.
¹ https://github.com/guidance-ai/guidance

Method | ReClor (MCRC) | AR-LSAT (MCRC) | LogiQA22 (MCRC) | ConTRoL (NLI) | NaN-NLI (NLI) | RuleTaker (TF) | ProofWriter (TF)
Human avg. | 63.00 | 56.00 | 83.00 | 87.00 | 94.00 | 84.00 | 82.00
Human Ceiling | 100.00 | 91.00 | 99.00 | 94.00 | 100.00 | 95.00 | 93.00
GPT-3.5-Direct | 56.28 | 51.31 | 41.14 | 57.94 | 56.86 | 55.33 | 54.68
GPT-3.5-CoT | 56.90 | 51.45 | 42.92 | 58.29 | 55.54 | 55.88 | 53.02
GPT-3.5-LA | 59.73 | 55.29 | 42.98 | 62.01 | 61.34 | 71.30 | 73.85
GPT-4-Direct | 88.54 | 74.21 | 60.11 | 56.34 | 77.07 | 59.85 | 61.58
GPT-4-CoT | 89.06 | 73.49 | 58.43 | 56.97 | 77.83 | 61.43 | 60.64
GPT-4-LA | 89.47 | 77.28 | 60.67 | 58.93 | 80.66 | 65.84 | 68.42
Davinci-002-Direct | 20.41 | 13.54 | 11.02 | 8.43 | 10.78 | 25.98 | 22.54
Davinci-002-CoT | 19.43 | 18.85 | 13.27 | 13.61 | 15.34 | 26.84 | 27.33
Davinci-002-LA | 27.45 | 22.60 | 30.68 | 15.58 | 24.73 | 32.10 | 33.54
LLaMA-2-Direct | 17.31 | 12.70 | 18.55 | 20.12 | 22.08 | 25.50 | 23.39
LLaMA-2-CoT | 15.62 | 13.76 | 16.03 | 21.75 | 25.44 | 22.39 | 23.16
LLaMA-2-LA | 23.76 | 21.63 | 30.21 | 25.48 | 22.76 | 28.79 | 25.11
Mixtral-8x7b-Direct | 48.92 | 41.40 | 38.97 | 50.84 | 50.13 | 46.84 | 44.80
Mixtral-8x7b-CoT | 49.21 | 44.33 | 40.96 | 50.32 | 53.04 | 48.52 | 45.85
Mixtral-8x7b-LA | 50.58 | 45.95 | 44.92 | 52.25 | 55.96 | 52.53 | 55.68
Table 2: Main results. All results are in %.

Data preprocessing
• For the Multiple-Choice Reading Comprehension (MCRC) task, we combine the context, question, and options to form a single input.
• In Natural Language Inference tasks, premises and hypotheses are concatenated, with a distinct identifier prefacing each segment.
• For True-or-False questions, we concatenate the context with the question to generate a cohesive input prompt.
Metrics
To assess the performance of LLMs in our experiments, we employ the exact-match metric. This involves prompting LLMs to generate answers either as the first token (direct answer) or at the end of the generation process (CoT and LA). The extracted answers are then compared with the gold labels to calculate the accuracy score.

4.2 Results
The primary outcomes of our experiments are summarized in Table 2, where we juxtapose the performance of different models on various logical reasoning tasks. These tasks span multiple-choice reading comprehension (MCRC), natural language inference (NLI), and true-or-false (TF) questions, utilizing ReClor, AR-LSAT, and LogiQA22 for MCRC, ConTRoL and NaN-NLI for NLI, and RuleTaker and ProofWriter for TF tasks. The human performance benchmarks referenced in the table are sourced from prior research (Yu et al., 2020; Wang et al., 2022a; Liu et al., 2023).
Direct Answer vs. Chain-of-Thought (CoT): Our analysis reveals that, in the context of the logical reasoning tasks tested, the few-shot CoT approach marginally outperforms the direct answer methodology. However, this superiority is not uniform across all cases. In certain instances, the CoT method appears to detrimentally impact the overall results, suggesting limitations in the effectiveness of CoT prompting in some logical reasoning scenarios. This observation highlights the inherent challenge of using CoT prompting to navigate the complexities of logical reasoning, especially in tasks where intricate inference is required.
Performance Across Models: Our further analysis delves into the performance distinctions across various models, highlighting the contrasts between advanced models such as GPT-4 and base models like DAVINCI-002 and LLAMA-2-13B. DAVINCI-002, as a base model, shows distinct performance characteristics under the LA framework. For instance, in the MCRC task on the ReClor dataset, DAVINCI-002 under LA achieves a 27.45% accuracy, a notable improvement from its Direct answer performance at 20.41%. This trend is consistent across other datasets, such as in LogiQA22, where DAVINCI-002's accuracy increases from 11.02% (Direct) to 30.68% (LA). These results suggest that the structured reasoning provided by LA can significantly enhance the logical reasoning abilities of even base models, enabling them to outperform their standard configurations. Similarly, LLAMA-2-13B, another base model, exhibits a marked performance enhancement with the application of LA. In the TF task using the RuleTaker dataset, LLAMA-2-13B registers an accuracy of 28.79% under LA, compared to 25.50% in the Direct answering format. In the more challenging ProofWriter dataset, the model improves from 23.39% (Direct) to 25.11% (LA). These improvements, while not as pronounced as those seen with advanced models like GPT-4, nonetheless indicate that LA can elevate the performance of base models in logical reasoning tasks. Comparatively, advanced models like GPT-4 demonstrate a more significant leap in performance with the LA approach. This is particularly evident in datasets that require complex logical deductions, such as ProofWriter, where GPT-4 with LA achieves a 68.42% accuracy, substantially higher than both its Direct (61.58%) and CoT (60.64%) counterparts. This comparative analysis across different models underscores the versatility of the LA framework. While advanced models like GPT-4 naturally exhibit higher baseline performances, the introduction of LA leads to substantial improvements in logical reasoning tasks across all model types, including base models like DAVINCI-002 and LLAMA-2-13B. This suggests that LA's structured, rule-guided reasoning approach is universally beneficial, enhancing the logical reasoning capabilities of a wide range of LLMs.

LA's Efficacy: The implementation of LA consistently enhances accuracy across various datasets, underscoring its effectiveness in logical reasoning. In the TF tasks using the RuleTaker dataset, LA with GPT-3.5 achieves an impressive 71.30% accuracy, a substantial leap from the 55.33% in the Direct approach and 55.88% in the CoT approach. Similarly, in the ProofWriter dataset, GPT-3.5 with LA reaches 73.85% accuracy, outperforming both its Direct (54.68%) and CoT (53.02%) formats. These figures highlight LA's capability to significantly refine the reasoning process in LLMs, enabling them to handle complex logic with greater precision and reliability. The improvement is even more pronounced with advanced models like GPT-4, where the accuracy on the RuleTaker dataset jumps to 65.84% under LA, compared to 59.85% (Direct) and 61.43% (CoT). This consistent pattern across various models and datasets firmly establishes LA as a transformative approach in logical reasoning, bridging the gap between computational AI and nuanced human-like reasoning. We present a detailed case study in Appendix A.
This case study meticulously demonstrates how LA navigates complex logical reasoning tasks, showcasing its capabilities and the enhancements it brings to the decoding stage of Large Language Models (LLMs).

Task-Specific Insights: Delving into task-specific performances, we observe that LA aligns exceptionally well with the demands of MCRC and NLI tasks, as evidenced by GPT-4's superior performance on the ReClor and NaN-NLI datasets. The tailored application of LA's rule-based reasoning to each task's unique requirements elucidates its broad applicability and effectiveness. The differential performance uplifts across datasets highlight the adaptability of LA. For instance, the significant accuracy increase on the ProofWriter dataset for GPT-4 underscores LA's capacity to handle datasets requiring complex logical deductions. This adaptability is crucial for tailoring reasoning enhancements to specific task demands.

5 Discussion
5.1 GPT-4 as Logic Parser
GPT-4, despite its occasional inconsistencies in generating new logical expressions, exhibits a noteworthy capability in parsing natural language into formal logic. This ability is particularly relevant to our Logic Agent (LA) framework, where accurate translation of natural language into propositional logic is crucial. To harness GPT-4's parsing capabilities, we crafted specific prompts aimed at guiding the model to translate natural language statements into propositional logic forms. These forms are then seamlessly integrated into the deduction functions of LA. A critical requirement for this integration is the compatibility of GPT-4's output with our framework's syntax. Therefore, the prompts are designed not only to elicit the correct logical structures but also to ensure that these structures adhere to the syntax conventions of our default parser. To evaluate the effectiveness of GPT-4 in this role, we conducted experiments comparing its parsing capabilities with our default logic parser. The comparative results, detailed in Table 3, demonstrate a slight edge in performance when utilizing GPT-4 as a parser.

Dataset | Default parser | GPT-4 parser
ReClor dev | 59.73 | 60.65
AR-LSAT test | 55.29 | 55.87
LogiQA22 | 42.98 | 44.07
ConTRoL | 62.01 | 65.24
NaN-NLI | 61.34 | 63.48
RuleTaker dev | 71.30 | 71.45
ProofWriter dev | 73.85 | 72.13
Table 3: GPT-3.5-TURBO model results with GPT-4 as the parser.

This finding underscores the efficiency and accuracy of GPT-4 in interpreting and translating complex logical statements from natural language into formal logic constructs. However, it is important to consider the trade-offs involved. Utilizing GPT-4 as a parser introduces additional computational costs, and there may be instances of variability in the parsing quality. These factors necessitate a careful assessment of the cost-benefit ratio, especially in scenarios where computational resources are a limiting factor or where absolute consistency in logic parsing is critical. Our findings suggest that while GPT-4 can effectively augment our framework as a neural parser, its integration should be strategically employed, taking into account the specific requirements and constraints of the given logical reasoning task. The potential of GPT-4 to enhance the versatility and adaptability of logical reasoning frameworks is clear, yet its application needs to be tempered with an understanding of its limitations and costs.
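For illustration, a sketch of how such a parsing prompt might be issued; the prompt wording, helper name, and model call (using the openai Python client) are our assumptions, not the paper's released prompts.

    from openai import OpenAI

    client = OpenAI()

    PARSE_PROMPT = (
        "Translate the statement into propositional logic using only the constructs "
        "Atom(...), Not(...), And(...), Or(...), Implies(...), Equiv(...), so that "
        "the output can be consumed by the default parser.\n"
        "Statement: {statement}\nLogic:"
    )

    def gpt4_parse(statement: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # deterministic output keeps the emitted syntax stable
            messages=[{"role": "user",
                       "content": PARSE_PROMPT.format(statement=statement)}],
        )
        return resp.choices[0].message.content.strip()

    # e.g. gpt4_parse("If the Moon's surface was once a magma ocean, then ...")
    # would be expected to return something like
    # Implies(Atom(Moon's surface was once a magma ocean), Atom(the distribution ...))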
5.2 Ablation Study
An essential aspect of our research was to ascertain the specific contribution of the parsed logic within the LA method. To achieve this, we conducted an ablation study in which we tested the impact of augmenting text with parsed logic on the direct answer approach, while deliberately omitting the constrained generation component integral to LA. This allowed us to isolate and understand the effectiveness of the logic parsing process on its own. By comparing the performance of models using only parsed-logic-augmented text for direct answering with their performance under the full LA framework, we could assess the incremental value added by the constrained generation aspect of LA. We choose one dataset from each task and use GPT-3.5-TURBO as the tested model. The results are shown in Figure 3.

[Figure 3: GPT-3.5-TURBO results on the ablation test over AR-LSAT, ConTRoL, and RuleTaker, comparing LA, parsed-logic-augmented text, and the original text.]

Across the three datasets, we observed a noticeable decrease in accuracy when the models were deprived of the constrained generation process and relied solely on the parsed-logic-augmented text. This decline in performance underscores the significance of the constrained generation component in the LA framework. It highlights that while the logic parsing capability is a valuable contributor to the model's overall performance, the full potential of LA is realized only when it is coupled with the sophisticated generation constraints that guide the model towards more logically coherent and accurate conclusions.

6 Conclusion
In this study, we present Logic Agent (LA), an innovative framework guided by logic rules to enhance the logical reasoning capabilities of Large Language Models (LLMs). Our comprehensive experiments across various models and datasets demonstrate that LA, with its integration of propositional logic and deduction rules, consistently surpasses traditional reasoning approaches. Notably, it shows superior performance in tasks requiring intricate logical deductions, highlighting its potential to bridge the gap between AI computational power and human-like logical reasoning. The exploration of GPT-4 as a neural logic parser further reveals the feasibility and challenges of incorporating advanced LLMs within logical reasoning systems. Looking ahead, the refinement of LA for broader applications and its scalability remain pivotal areas for future research. In sum, the LA framework not only elevates the performance of LLMs in complex reasoning tasks but also paves the way for more sophisticated and interpretable AI reasoning capabilities.

Limitations
While the Logic Agent (LA) framework presents significant advancements in logical reasoning with Large Language Models (LLMs), it is important to acknowledge its limitations for a balanced understanding of our research.
Dependency on Model Capability: The effectiveness of LA is partly contingent on the underlying capabilities of the LLMs used. This dependency indicates that the full potential of LA might be limited by the current state of LLM technology.
Scope of Logical Reasoning: Currently, LA primarily focuses on propositional logic and certain deduction rules. Its applicability to other forms of logic, such as predicate logic or modal logic, has not been extensively explored, which might restrict its utility in more diverse logical reasoning scenarios.
Generalizability: While the framework has shown promise across various datasets, the generalizability of LA to real-world scenarios or domains beyond those tested remains an area for future investigation.", "additional_info": [ [ { "url": "http://arxiv.org/abs/2404.15949v1", "title": "Sequence can Secretly Tell You What to Discard", "abstract": "Large Language Models (LLMs), despite their impressive performance on a wide\nrange of tasks, require significant GPU memory and consume substantial\ncomputational resources. In addition to model weights, the memory occupied by\nKV cache increases linearly with sequence length, becoming a main bottleneck\nfor inference. In this paper, we introduce a novel approach for optimizing the\nKV cache which significantly reduces its memory footprint. Through a\ncomprehensive investigation, we find that on LLaMA2 series models, (i) the\nsimilarity between adjacent tokens' query vectors is remarkably high, and (ii)\ncurrent query's attention calculation can rely solely on the attention\ninformation of a small portion of the preceding queries. Based on these\nobservations, we propose CORM, a KV cache eviction policy that dynamically\nretains important key-value pairs for inference without finetuning the model.\nWe validate that CORM reduces the inference memory usage of KV cache by up to\n70% without noticeable performance degradation across six tasks in LongBench.", "authors": "Jincheng Dai, Zhuowei Huang, Haiyun Jiang, Chen Chen, Deng Cai, Wei Bi, Shuming Shi", "published": "2024-04-24", "updated": "2024-04-24", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI", "cs.LG" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Attention
Let x ∈ R^{n×d} denote the input embeddings of a sequence of n feature vectors of dimension d. Multi-head self-attention [17], a core module of the Transformer model, facilitates contextual information interaction within each head in the following manner:
Q = xW_q, K = xW_k, V = xW_v, Attention(x) = softmax(QKᵀ / √d_h) × V  (1)
Q, K, V represent the query, key, and value matrices, obtained by linearly mapping x using the weight matrices W_q, W_k, W_v ∈ R^{d×d_h}, respectively; d_h is the dimension of each individual head.
[Footnote 2: Let t denote the sequence length; we count the proportion of keys whose attention score is larger than the average score 1/t and denote it as r. The larger r is, the sparser the layer is.]
KV Cache
According to the autoregressive paradigm, a transformer decoder model predicts future tokens conditioned on both previous and current tokens. Recalculating the key-value pairs of previous tokens at each decoding step is clearly an inefficient strategy; common practice is to retain the key-value pairs of previous tokens for subsequent reuse. Thus, the memory consumed by the KV cache grows linearly with the length of the input sequence. When dealing with long contexts, however, even such a space-time trade-off approach may pose challenges.
Training Policies
Multi-query attention (MQA) [8] addresses the influence of the number of attention heads on the KV cache within the multi-head attention (MHA) mechanism: it shares the same set of keys and values among different heads to alleviate cache pressure. Grouped-query attention (GQA) [9] represents a trade-off between MHA and MQA, achieving key-value sharing within each group through mean-pooling-based uptraining.
Both methods require additional training to restore model performance, since the conversion cannot be done directly.
Training-free Policies
During generation, sequence length is the primary driver of cache pressure. Recent methods aim to balance model efficiency and inference cost without extra training or architectural changes. StreamingLLM [10] keeps the attention-sink token and recent tokens throughout the decoding process to align with the training window. Scissorhands [11] maintains pivotal tokens and recent tokens based on the persistence-of-importance hypothesis. H2O [12] utilizes accumulated attention scores to maintain heavy hitters alongside recent tokens. TOVA [13] removes the token with the lowest current attention score from a fixed cache at each decoding step. RoCo [14] retains tokens in a fixed cache based on high mean cumulative attention scores and the top-r standard deviations. The aforementioned methods all operate on a fixed-size cache, ignoring that the number of tokens playing an important role may vary across attention heads and layers.", "pre_questions": [], "main_content": "Introduction
Large language models (LLMs) have demonstrated impressive proficiency in a wide range of natural language processing tasks such as question answering, summarization and multi-turn dialogues [1-3]. Considering the substantial cost of deploying LLMs, introduced by tremendous model size and the quadratic cost of the attention layer, many works have focused on model compression and memory-efficient attention techniques [4-7]. However, the size of the KV cache, which stores previous tokens' key and value states to avoid re-computation and scales linearly with sequence length during generation, also incurs significant overhead. For instance, even a 7-billion-parameter model with a batch size of 128 and a sequence length of 4096 results in 256GB of KV cache, far exceeding the memory consumed by the model itself, which is only 14GB. A natural idea is to discard some less informative KV cache entries to reduce memory consumption. The challenge lies in finding a balance between discarding as much as possible while still maintaining model performance. Although multi-query attention [8] and grouped-query attention [9] can reduce the size of the KV cache by reducing the number of attention heads, they require re-training to recover the performance of the original model. Recent works [10-14] have investigated implementing the KV cache with specific eviction policies that determine which key-value states should be evicted. These methods aim to compress the KV cache to a pre-defined budget size, thereby reducing memory and computational overhead. However, they save the same number of key-value pairs for all attention heads and layers, ignoring that the number of keys playing an important role may vary across different attention heads and layers [15].

[Figure 1: Attention sparsity of LLaMA2-7B. (a) Layer-wise attention sparsity. (b) Head-wise attention sparsity of layer 0 and layer 1.]

Intuitively, if the important information in the KV cache exceeds the predetermined budget size, the performance of the model is likely to decline, as some crucial information is unavoidably evicted. Our preliminary exploration also reveals that different attention layers and heads show different sparsities, as shown in Figure 1. First, we observe that the bottom layers of the model are relatively dense², while the remaining attention layers exhibit significant sparsity.
Second, even within the same layer, different heads can exhibit obvious differences in sparsity levels. These properties suggest that we need to treat different layers and heads differently, rather than using the same budget size for all of them. In addition, we prove that completely similar queries have similar concerns about keys, and observe that recent query vectors are quite similar on LLaMA2 series models, so the current query can directly use recent query attention messages during generation. Based on the above insights, we first define the generation process of LLMs with a budget-unrestricted KV cache in Section 3. Then we propose Cache Optimization with Recent Message (CORM), a framework that exploits recent query attention information for KV cache optimization and token generation of LLMs. Specifically,
• In Section 3, we explore the similarity between query vectors of all tokens within the same sequence, revealing that recent query vectors are highly similar, which implies that (i) keys that are important for recent queries might also be important for the current query; and (ii) removing key-value pairs that appear to be less informative for recent queries can largely preserve the performance of the model.
• In Section 4, we present a simple method which dynamically evicts minor key-value pairs determined by recent tokens' attention information.
We conduct extensive experiments on LLaMA2-7B-Chat, considering its popularity and wide usage, to evaluate CORM across 6 tasks from LongBench [16] covering question answering, summarization, code completion, etc. Experiments show that even without explicitly setting a budget size, our method can still achieve a high compression rate. Our method achieves better performance compared to StreamingLLM [10], Scissorhands [11] and H2O [12] at an over 70% KV cache reduction rate, and can even come close to fully restoring the performance of the model.

We first demonstrate the existence of attention sparsity in LLMs in Section 3.1, then discuss the phenomenon that similar queries have similar attention concerns for keys in Section 3.2. In Section 3.3, we show an intriguing observation that the current query is most similar to recent queries.

3.1 Attention sparsity in LLMs
We first explore the sparsity in the attention layers of LLMs, which provides an effective guarantee that the KV cache size can be reduced. Specifically, we use the proportion of important keys to represent attention sparsity. Let q_t ∈ R^{1×d} denote the query state vector at step t and k_i ∈ R^{1×d} denote the key state vector at step i (1 ≤ i ≤ t), where d is the hidden dimension (for the sake of simplicity, we only consider a single head here). The normalized attention score of q_t for k_i is computed as:
α_{t,i} = exp(q_t k_iᵀ / √d) / Σ_{j=1}^{t} exp(q_t k_jᵀ / √d).  (2)
Definition 3.1 (Important Key) A key k_i is considered important at step t if and only if α_{t,i} ≥ 1/t; otherwise it is considered minor.
We conduct zero-shot inference with the LLaMA2-7B model on the test set of PG-19 [18]. We plot the layer-wise and head-wise sparsity within attention blocks; the results are presented in Figure 1. They reveal that the bottom layers are relatively dense, while the other layers are highly sparse, with over 90% sparsity.
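Definition 3.1 translates directly into a few lines of tensor code. The sketch below is our reconstruction of the measurement, assuming attn holds the softmax-normalized causal attention probabilities of one layer (e.g., from a forward pass with output_attentions=True); it is illustrative, not the paper's released code.

    import torch

    def important_key_fraction(attn: torch.Tensor) -> torch.Tensor:
        """attn: [num_heads, t, t] causal attention probabilities of one layer.
        Returns, per head, the fraction of (query, key) pairs satisfying
        alpha_{t,i} >= 1/t (Definition 3.1); a smaller fraction indicates a
        sparser head, since attention mass concentrates on fewer keys."""
        num_heads, t, _ = attn.shape
        steps = torch.arange(1, t + 1, dtype=attn.dtype)
        important = attn >= (1.0 / steps).view(1, t, 1)        # row-wise threshold 1/t
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool))
        important = important & causal                          # only keys with i <= t exist
        return important.sum(dim=(1, 2)) / causal.sum()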
This makes it possible to perform attention computation on only a small part of the KV cache during generation.

3.2 Similar queries have similar concerns for keys
The previous section reveals the existence of attention sparsity in LLMs, which provides an opportunity to reduce the KV cache size while maintaining performance. In this section we give a theoretical analysis showing that similar queries have similar concerns for keys, which informs the eviction policy design. Consider the i-th and j-th query state vectors q_i and q_j in a sequence of token length T (i < j ≤ T). Their cosine similarity can be computed as:
cosine_similarity(q_i, q_j) = q_i q_jᵀ / (∥q_i∥ · ∥q_j∥).  (3)
Consider all key states k_1, k_2, ..., k_{i−1} before the i-th key. Assume that cosine_similarity(q_i, q_j) = 1; then q_i = m · q_j with m ∈ R⁺. The attention weight³ of q_i over the previous i − 1 keys can be represented as:
attention_weight = (1/√d) (q_i k_1ᵀ, q_i k_2ᵀ, ..., q_i k_{i−1}ᵀ) = (m/√d) · (q_j k_1ᵀ, q_j k_2ᵀ, ..., q_j k_{i−1}ᵀ).  (4)
Note that m is a positive number that does not affect the relative order of the attention weights. For example, for q_i, if q_i k_1ᵀ > q_i k_2ᵀ, there must be q_j k_1ᵀ > q_j k_2ᵀ for q_j. This means that if a key is important to q_i, it is also important to q_j, though the degree of importance may vary due to the softmax function.
[Footnote 3: attention weight is the unnormalized attention score.]

[Figure 2: Similar queries have similar concerns for keys. We plot the attention maps from two different layers over one sentence. We discretize the attention scores, and important keys are shown in bright green. Each attention map has two red borders: the bottom border shows the important keys that the current query actually focuses on, while the other shows the important keys that the most similar query focuses on.]

Although it is nearly impossible that cosine_similarity(q_i, q_j) = 1 in a real situation, we can make the hypothesis that two similar queries may have similar concerns for keys. To validate this hypothesis, we provide two attention maps of a sentence randomly drawn from PG-19 using LLaMA2-7B, as shown in Figure 2. Important keys are marked in bright green; more plots are available in Appendix A.1. We observe that the hypothesis holds: similar queries exhibit similar concerns for important keys. At the same time, important keys account for only a small proportion, especially in the deeper attention layers, which is consistent with the finding in the previous section that deeper layers are sparser.

3.3 Similarity exploration of query vectors
We have validated in Section 3.2 that two similar queries have similar concerns for keys; we also need to validate that at each step we can find a previous query state that is similar enough to the current query state in the same layer and the same head. To check this, we visualize the cosine similarity of query vectors within the same sequence, as shown in Figure 3; more plots are available in Appendix A.2. We observe an intriguing phenomenon: many of the maps show clear oblique color segmentation, with the top oblique block closest to dark red, which means the current query is most similar to recent queries.

[Figure 3: Visualization of query vectors' cosine similarity over one sentence with a length of 1024. The i-th row of the map represents the cosine similarity of the i-th query to all previous queries. The plot reveals that in most cases the current query is most similar to recent queries.]
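The map in Figure 3 is simple to reproduce; below is a minimal sketch (our reconstruction) for one head, matching Equation (3):

    import torch
    import torch.nn.functional as F

    def query_similarity_map(q: torch.Tensor) -> torch.Tensor:
        """q: [t, d] query states of one head over a sequence. Returns a [t, t]
        lower-triangular map whose (i, j) entry is cosine_similarity(q_i, q_j)
        for j <= i, as visualized in Figure 3."""
        qn = F.normalize(q, dim=-1)   # q_i / ||q_i||
        return torch.tril(qn @ qn.T)  # pairwise cosine similarities, previous queries only

    # The Section 3.3 observation corresponds to the band just below the diagonal
    # staying close to 1: each row's largest entries sit in its most recent columns.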
Through the above observations, we see an opportunity to design a KV cache eviction policy based on query similarity that preserves the LLM's generation performance.

4 Cache Optimization with Recent Message
In this section, we present CORM, a method that reduces KV cache memory based on recent query attention information, without any fine-tuning. In Section 4.1, we derive that the current query can directly use recent query attention messages during generation. In Section 4.2, we present the CORM eviction policy and describe how it works during generation.

4.1 Generate based on recent query attention messages
Given the observations in Section 3, intuitively, we could directly store all queries and their attention information for future reference: at each generation step, use the current query to find the most similar previous query, and use its saved attention information to compute attention solely over the important keys. However, this approach incurs a significant cost. First, storing all queries results in a substantial increase in memory overhead. Second, performing similarity calculations at each step adds computational overhead. Since in most cases the current query is most similar to recent queries, as described in Section 3.3, we can simply use recent query attention messages. And from Figure 2 we can also observe that only a small proportion of keys are considered important by recent queries. Therefore, even if we save all the keys that recent steps consider important, we can still save a lot of memory.

4.2 Eviction algorithm via recent message
We have shown in Section 4.1 that recent query attention information suffices for cache optimization. In the following, we formally define the algorithm and describe how to integrate it directly into LLM generation.
Definition 4.1 (Long-term Minor Key) A key k_i is considered a long-term minor key only if it is considered minor in all of the most recent r steps (from t − r + 1 to t).
Approach
CORM maintains a recent window of size w to record the information of the most recent w queries, and always keeps the most recent r keys unremoved to prevent them from being discarded prematurely due to insufficient observations. During generation, (k_i, v_i) will be discarded once k_i is regarded as a long-term minor key. For clarity, we present the PyTorch code⁴ of the main algorithm in Algorithm 1. Intuitively, when w is larger, more keys and values are saved: the compression rate is lower and performance is better; conversely, when w is smaller, fewer keys and values are saved: the compression rate is higher and performance is worse. There is thus a trade-off between performance and compression rate.
Memory Overhead Analysis
While reducing the memory overhead of the KV cache, an extra memory overhead is introduced by the recent-information cache: we need to store recent query messages, which increases memory usage. However, this overhead is far smaller than the memory saved on the compressed KV cache; one can use a small amount of memory to avoid maintaining the full KV cache without obvious performance degradation. Moreover, the compression rate increases as the sequence length increases, as shown in Figure 4, making this component's overhead comparatively lower.
Algorithm 1: Single-head KV cache eviction with CORM (unbatched)

import torch

def corm_eviction(keys, values, message, attn_score, w, r, t):
    """
    Args:
        keys: previous key states, a tensor with shape [l, d]
        values: previous value states, a tensor with shape [l, d]
        message: attention message, a boolean tensor with shape [m, l-1]
        attn_score: current step's attention score, a tensor with shape [1, l]
        w: window size, a scalar
        r: recent size, a scalar
        t: current step, a scalar
    Returns:
        updated_keys: updated keys
        updated_values: updated values
        updated_message: updated message
    """
    m = message.shape[0]
    # update attention message: pad each stored query's record to [m, l]
    message = torch.cat([message, torch.zeros(m, 1, dtype=torch.bool)], dim=1)
    cur_message = attn_score >= 1 / t  # keys important at step t (Definition 3.1)
    # stack the new record row-wise and keep only the most recent w records
    message = torch.cat([message, cur_message], dim=0)[-w:, :]
    if message.shape[0] < w:
        return keys, values, message
    else:
        # determine the key-value pairs that necessitate discarding:
        # keep a key if any of the recent w queries found it important
        decision = message.any(dim=0)
        decision[-r:] = True  # always keep recent r tokens unremoved
        indices = torch.nonzero(decision).squeeze()
        keys = keys[indices, :]
        values = values[indices, :]
        return keys, values, message

⁴ For the sake of brevity, the code snippet only demonstrates the single-head eviction operation; in the actual implementation, it is performed on each head at every layer.
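For context, here is how the routine above might sit inside a greedy decode loop. Everything around corm_eviction (the model step, the cache bookkeeping, and the placeholders d, max_new_tokens, model_step) is our illustrative assumption; the w=256, r=256 values mirror the "256+256" configuration used in Section 5.

    # Hypothetical single-head, unbatched decode loop around Algorithm 1.
    keys = torch.empty(0, d)                       # compressed KV cache
    values = torch.empty(0, d)
    message = torch.empty(0, 0, dtype=torch.bool)  # recent-window importance records
    for t in range(1, max_new_tokens + 1):
        q_t, k_t, v_t = model_step()               # placeholder: one decoding step
        keys = torch.cat([keys, k_t], dim=0)
        values = torch.cat([values, v_t], dim=0)
        attn_score = torch.softmax(q_t @ keys.T / d ** 0.5, dim=-1)  # [1, l]
        out = attn_score @ values                  # attention output for this step
        keys, values, message = corm_eviction(keys, values, message,
                                              attn_score, w=256, r=256, t=t)
        # Note: once an eviction happens, the columns of `message` would also need
        # filtering by the kept indices; Algorithm 1 elides this bookkeeping.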
5.1 Budget unnecessity: is unbudgeted better?

We primarily focus on the effectiveness of not setting a budget versus setting a fixed budget. Note that since we use the same window size and recent size as Scissorhands in this experiment, it can be regarded as a natural ablation. Tables 1 and 2 show that, at a similar compression rate, CORM is much better than Scissorhands on most tasks, and performance on the remaining tasks is close. This verifies that different transformer layers and heads should be treated differently rather than being assigned the same fixed budget size.

Table 1: Results (%) on single-doc QA, multi-doc QA and summarization tasks. \"Full\" refers to LLaMA2-7B-Chat utilizing the full KV cache, \"StreamLLM\" is configured with 4+1020, \"Scissorhands\" is configured with 768+256 where window size=256, \"H2O\" is configured with 768+256, and \"CORM\" is configured with 256+256 for fair comparison. For brevity, datasets are denoted by IDs; the mapping from ID to dataset can be found in Appendix B. Columns 1-1 to 1-4 are single-doc QA, 2-1 to 2-4 multi-doc QA, and 3-1 to 3-4 summarization.

Method        1-1   1-2   1-3   1-4   2-1   2-2   2-3  2-4  3-1   3-2   3-3   3-4
Full          19.0  22.1  36.7  11.8  27.8  31.5  8.3  6.8  26.8  20.7  26.2  0.2
StreamLLM     13.2  15.4  27.2   6.5  24.2  25.4  5.3  4.4  21.6  19.8  24.4  0.1
Scissorhands  16.6  18.7  32.4   9.9  26.3  32.1  8.9  5.7  22.1  20.7  25.4  0.2
H2O           17.9  19.5  34.9  11.5  27.5  29.7  7.5  7.1  24.5  21.0  25.8  0.2
CORM          18.9  22.2  38.6  12.0  27.6  31.6  8.4  7.1  26.4  21.0  25.8  0.2

Table 2: Results (%) on few-shot learning, synthetic, and code tasks. \"Overall\" is the macro-average over major task categories, computed on English (EN) tasks, Chinese (ZH) tasks, and all (All) tasks; code tasks are included in both languages. Columns 4-1 to 4-4 are few-shot learning, 5-1 to 5-3 synthetic, and 6-1 and 6-2 code.

Method        4-1   4-2   4-3   4-4   5-1  5-2  5-3   6-1   6-2   EN    ZH    All
Full          64.0  83.3  41.4  17.3  2.9  7.8  10.0  58.3  52.2  32.8  16.9  28.9
StreamLLM     61.0  82.9  39.1  14.5  1.8  4.7   6.5  57.6  50.0  29.5  14.3  25.7
Scissorhands  52.5  83.6  40.7  17.0  3.1  6.5   7.7  56.8  52.1  31.0  15.8  27.2
H2O           63.0  81.5  39.9  17.0  2.8  7.0   7.3  57.8  52.3  31.8  16.4  28.0
CORM          64.0  83.5  41.3  17.3  2.9  9.0   9.1  58.3  52.0  32.9  16.8  28.9

6 Conclusion

In this paper, we investigate a critical memory bottleneck in LLM deployment: the KV cache. Inspired by the observations that similar queries have similar concerns for keys and that recent queries are sufficiently similar to the current one, we propose CORM, an unbudgeted KV cache eviction policy that significantly reduces the memory footprint by reusing recent query attention messages. Through extensive evaluations, we demonstrate that CORM can reduce the inference memory usage of the KV cache by up to 70% without noticeable performance degradation across a variety of tasks." }, { "url": "http://arxiv.org/abs/2309.17453v4", "title": "Efficient Streaming Language Models with Attention Sinks", "abstract": "Deploying Large Language Models (LLMs) in streaming applications such as\nmulti-round dialogue, where long interactions are expected, is urgently needed\nbut poses two major challenges. Firstly, during the decoding stage, caching\nprevious tokens' Key and Value states (KV) consumes extensive memory. Secondly,\npopular LLMs cannot generalize to longer texts than the training sequence\nlength. Window attention, where only the most recent KVs are cached, is a\nnatural approach -- but we show that it fails when the text length surpasses\nthe cache size. We observe an interesting phenomenon, namely attention sink,\nthat keeping the KV of initial tokens will largely recover the performance of\nwindow attention.
In this paper, we first demonstrate that the emergence of\nattention sink is due to the strong attention scores towards initial tokens as\na \"sink\" even if they are not semantically important. Based on the above\nanalysis, we introduce StreamingLLM, an efficient framework that enables LLMs\ntrained with a finite length attention window to generalize to infinite\nsequence lengths without any fine-tuning. We show that StreamingLLM can enable\nLlama-2, MPT, Falcon, and Pythia to perform stable and efficient language\nmodeling with up to 4 million tokens and more. In addition, we discover that\nadding a placeholder token as a dedicated attention sink during pre-training\ncan further improve streaming deployment. In streaming settings, StreamingLLM\noutperforms the sliding window recomputation baseline by up to 22.2x speedup.\nCode and datasets are provided at https://github.com/mit-han-lab/streaming-llm.", + "authors": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis", + "published": "2023-09-29", + "updated": "2024-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.17118v2", + "title": "Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time", + "abstract": "Large language models(LLMs) have sparked a new wave of exciting AI\napplications. Hosting these models at scale requires significant memory\nresources. One crucial memory bottleneck for the deployment stems from the\ncontext window. It is commonly recognized that model weights are memory hungry;\nhowever, the size of key-value embedding stored during the generation process\n(KV cache) can easily surpass the model size. The enormous size of the KV cache\nputs constraints on the inference batch size, which is crucial for high\nthroughput inference workload. Inspired by an interesting observation of the\nattention scores, we hypothesize the persistence of importance: only pivotal\ntokens, which had a substantial influence at one step, will significantly\ninfluence future generations. Based on our empirical verification and\ntheoretical analysis around this hypothesis, we propose Scissorhands, a system\nthat maintains the memory usage of the KV cache at a fixed budget without\nfinetuning the model. In essence, Scissorhands manages the KV cache by storing\nthe pivotal tokens with a higher probability. We validate that Scissorhands\nreduces the inference memory usage of the KV cache by up to 5X without\ncompromising model quality. We further demonstrate that Scissorhands can be\ncombined with 4-bit quantization, traditionally used to compress model weights,\nto achieve up to 20X compression.", + "authors": "Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, Anshumali Shrivastava", + "published": "2023-05-26", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.03762v7", + "title": "Attention Is All You Need", + "abstract": "The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. 
Experiments on two machine translation tasks show these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training data.", + "authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin", + "published": "2017-06-12", + "updated": "2023-08-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.06262v2", + "title": "On the Efficacy of Eviction Policy for Key-Value Constrained Generative Language Model Inference", + "abstract": "Despite the recent success associated with Large Language Models (LLMs), they\nare notably cost-prohibitive to deploy in resource-constrained environments due\nto their excessive memory and computational demands. In addition to model\nparameters, the key-value cache is also stored in GPU memory, growing linearly\nwith batch size and sequence length. As a remedy, recent works have proposed\nvarious eviction policies for maintaining the overhead of key-value cache under\na given budget. This paper embarks on the efficacy of existing eviction\npolicies in terms of importance score calculation and eviction scope\nconstruction. We identify the deficiency of prior policies in these two aspects\nand introduce RoCo, a robust cache omission policy based on temporal attention\nscores and robustness measures. Extensive experimentation spanning prefilling\nand auto-regressive decoding stages validates the superiority of RoCo. Finally,\nwe release EasyKV, a versatile software package dedicated to user-friendly\nkey-value constrained generative inference. Code available at\nhttps://github.com/DRSY/EasyKV.", + "authors": "Siyu Ren, Kenny Q. Zhu", + "published": "2024-02-09", + "updated": "2024-02-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.02150v1", + "title": "Fast Transformer Decoding: One Write-Head is All You Need", + "abstract": "Multi-head attention layers, as used in the Transformer neural sequence\nmodel, are a powerful alternative to RNNs for moving information across and\nbetween sequences. While training these layers is generally fast and simple,\ndue to parallelizability across the length of the sequence, incremental\ninference (where such paralleization is impossible) is often slow, due to the\nmemory-bandwidth cost of repeatedly loading the large \"keys\" and \"values\"\ntensors. We propose a variant called multi-query attention, where the keys and\nvalues are shared across all of the different attention \"heads\", greatly\nreducing the size of these tensors and hence the memory bandwidth requirements\nof incremental decoding. 
We verify experimentally that the resulting models can\nindeed be much faster to decode, and incur only minor quality degradation from\nthe baseline.", + "authors": "Noam Shazeer", + "published": "2019-11-06", + "updated": "2019-11-06", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.14048v3", + "title": "H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models", + "abstract": "Large Language Models (LLMs), despite their recent impressive\naccomplishments, are notably cost-prohibitive to deploy, particularly for\napplications involving long-content generation, such as dialogue systems and\nstory writing. Often, a large amount of transient state information, referred\nto as the KV cache, is stored in GPU memory in addition to model parameters,\nscaling linearly with the sequence length and batch size. In this paper, we\nintroduce a novel approach for implementing the KV cache which significantly\nreduces its memory footprint. Our approach is based on the noteworthy\nobservation that a small portion of tokens contributes most of the value when\ncomputing attention scores. We call these tokens Heavy Hitters (H$_2$). Through\na comprehensive investigation, we find that (i) the emergence of H$_2$ is\nnatural and strongly correlates with the frequent co-occurrence of tokens in\nthe text, and (ii) removing them results in significant performance\ndegradation. Based on these insights, we propose Heavy Hitter Oracle (H$_2$O),\na KV cache eviction policy that dynamically retains a balance of recent and\nH$_2$ tokens. We formulate the KV cache eviction as a dynamic submodular\nproblem and prove (under mild assumptions) a theoretical guarantee for our\nnovel eviction algorithm which could help guide future work. We validate the\naccuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of\ntasks. Our implementation of H$_2$O with 20% heavy hitters improves the\nthroughput over three leading inference systems DeepSpeed Zero-Inference,\nHugging Face Accelerate, and FlexGen by up to 29$\\times$, 29$\\times$, and\n3$\\times$ on OPT-6.7B and OPT-30B. With the same batch size, H2O can reduce the\nlatency by up to 1.9$\\times$. The code is available at\nhttps://github.com/FMInference/H2O.", + "authors": "Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher R\u00e9, Clark Barrett, Zhangyang Wang, Beidi Chen", + "published": "2023-06-24", + "updated": "2023-12-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.13245v3", + "title": "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints", + "abstract": "Multi-query attention (MQA), which only uses a single key-value head,\ndrastically speeds up decoder inference. However, MQA can lead to quality\ndegradation, and moreover it may not be desirable to train a separate model\njust for faster inference. We (1) propose a recipe for uptraining existing\nmulti-head language model checkpoints into models with MQA using 5% of original\npre-training compute, and (2) introduce grouped-query attention (GQA), a\ngeneralization of multi-query attention which uses an intermediate (more than\none, less than number of query heads) number of key-value heads. 
We show that\nuptrained GQA achieves quality close to multi-head attention with comparable\nspeed to MQA.", + "authors": "Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebr\u00f3n, Sumit Sanghai", + "published": "2023-05-22", + "updated": "2023-12-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.06104v1", + "title": "Transformers are Multi-State RNNs", + "abstract": "Transformers are considered conceptually different compared to the previous\ngeneration of state-of-the-art NLP models - recurrent neural networks (RNNs).\nIn this work, we demonstrate that decoder-only transformers can in fact be\nconceptualized as infinite multi-state RNNs - an RNN variant with unlimited\nhidden state size. We further show that pretrained transformers can be\nconverted into $\\textit{finite}$ multi-state RNNs by fixing the size of their\nhidden state. We observe that several existing transformers cache compression\ntechniques can be framed as such conversion policies, and introduce a novel\npolicy, TOVA, which is simpler compared to these policies. Our experiments with\nseveral long range tasks indicate that TOVA outperforms all other baseline\npolicies, while being nearly on par with the full (infinite) model, and using\nin some cases only $\\frac{1}{8}$ of the original cache size. Our results\nindicate that transformer decoder LLMs often behave in practice as RNNs. They\nalso lay out the option of mitigating one of their most painful computational\nbottlenecks - the size of their cache memory. We publicly release our code at\nhttps://github.com/schwartz-lab-NLP/TOVA.", + "authors": "Matanel Oren, Michael Hassid, Yossi Adi, Roy Schwartz", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. 
Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. 
We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. 
Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. 
We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. 
We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. 
To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. 
We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect to any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions that pertaining to ongoing debates. However, few\nsuch datasets exist in providing human-annotated labels reflecting the\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. 
Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. 
We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. 
Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to freetext queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provide a comprehensive investigation from perspectives of both computer\nscience and Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we supports the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks in the Github.\nSummarily, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. 
Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. 
This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as fair rankers.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive to prompt wording, as well as to few-shot demonstrations and their order,\nposing challenges to the fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations.
In this paper, we focus on LLMs' robustness on the task of\nmultiple-choice questions -- a commonly adopted task to study the reasoning and\nfact-retrieving capabilities of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction among the\ntop-2/3 choices, and that specific option placements may favor certain predictions\namong those top choices, depending on the question, due to positional bias.\nWe also identify patterns in the top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices in\nadjacent positions. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to an 8 percentage point improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs.
Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of their capabilities, how to rationally evaluate the\ncapabilities of LLMs is still an open problem. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Because the task\nconstruction process is highly randomized, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLMs) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity and equal representation based on factors such as race\nand gender, and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography.
This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space,\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into work oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and work oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight into the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare.
Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplication emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language Models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones.
These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions, including the use of sensitive attributes in user\nprompts, our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs.
We categorize methods by their optimization\nfocus (computational, memory, energy, financial, and network resources) and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make the evaluation thorough, we consider both group fairness and\nindividual fairness, and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups.
Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of an objective and fair benchmark that encompasses most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities.
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment, a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data (40 times less),\nutilizing merely 20 data points compared to the ML models' 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying the\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a small or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large-scale investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have a significant bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles.
While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies and propose a novel method that uses LLMs to debias their own\nprompts.
Our analysis provides a comprehensive picture of the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes; however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization.
Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is urgent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs,\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stakes applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks.
Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. In addition, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, accounts for a significant portion of hate\nspeech in practice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs.
Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. 
In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, concerns\nabout the appropriate use of these benchmarks and the fair comparison\nof different models are growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that can lead to inappropriate\nevaluation, \ie \emph{benchmark leakage}, referring to the situation where data\nrelated to evaluation sets is occasionally used for model training. This\nphenomenon has become more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers.
We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Models\n(LLMs) to evaluate the quality of other LLMs. Many studies have employed\nproprietary closed-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\beta$-weighted $\textit{Legal Safety Score ($LSS_{\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering their performance in the $\textit{Binary Statutory\nReasoning}$ task and the fairness they exhibit with respect to various axes of\ndisparity in Indian society. Task performance and fairness scores of\nLLaMA and LLaMA-2 models indicate that the proposed $LSS_{\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA-2 models increase the $LSS_{\beta}$,\nimproving their usability in the Indian legal domain.
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. 
Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. 
We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. 
Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. 
Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. 
This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. 
Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. 
Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. 
In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ)which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. 
For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. 
Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. 
While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). 
In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. 
ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. 
This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.12833v1", + "title": "How Far Can We Go with Practical Function-Level Program Repair?", + "abstract": "Recently, multiple Automated Program Repair (APR) techniques based on Large\nLanguage Models (LLMs) have been proposed to enhance the repair performance.\nWhile these techniques mainly focus on the single-line or hunk-level repair,\nthey face significant challenges in real-world application due to the limited\nrepair task scope and costly statement-level fault localization. However, the\nmore practical function-level APR, which broadens the scope of APR task to fix\nentire buggy functions and requires only cost-efficient function-level fault\nlocalization, remains underexplored. In this paper, we conduct the first\ncomprehensive study of LLM-based function-level APR including investigating the\neffect of the few-shot learning mechanism and the auxiliary repair-relevant\ninformation. Specifically, we adopt six widely-studied LLMs and construct a\nbenchmark in both the Defects4J 1.2 and 2.0 datasets. Our study demonstrates\nthat LLMs with zero-shot learning are already powerful function-level APR\ntechniques, while applying the few-shot learning mechanism leads to disparate\nrepair performance. Moreover, we find that directly applying the auxiliary\nrepair-relevant information to LLMs significantly increases function-level\nrepair performance. Inspired by our findings, we propose an LLM-based\nfunction-level APR technique, namely SRepair, which adopts a dual-LLM framework\nto leverage the power of the auxiliary repair-relevant information for\nadvancing the repair performance. The evaluation results demonstrate that\nSRepair can correctly fix 300 single-function bugs in the Defects4J dataset,\nlargely surpassing all previous APR techniques by at least 85%, without the\nneed for the costly statement-level fault location information. Furthermore,\nSRepair successfully fixes 32 multi-function bugs in the Defects4J dataset,\nwhich is the first time achieved by any APR technique ever to our best\nknowledge.", + "authors": "Jiahong Xiang, Xiaoyang Xu, Fanchu Kong, Mingyuan Wu, Haotian Zhang, Yuqun Zhang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 Large Language Model Large Language Models (LLMs) contain billions of parameters and are trained on petabyte-scale datasets. They are typically built based on the Transformer architecture [70] comprising an encoder for input processing and a decoder for output token generation. In particular, the decoder-only models as shown in Figure 1a have demonstrated superior text comprehension [61] and code generation capabilities [23]. 
Thus, they have garnered significant interest from researchers and have been widely applied to various downstream tasks in software engineering, e.g., test case generation [42, 47], vulnerability detection [31, 32], and program repair [33, 37, 77]. These models, when integrating domain-specific knowledge for specific tasks, are often fine-tuned [67] to further improve their performance. For instance, CodeLlama [63] is fine-tuned based on Llama 2 [69] for generating and discussing code, and Magicoder [7] is fine-tuned with OSS-Instruct [73] to enhance the code generation performance. While fine-tuning requires significant computational resources and specialized datasets [29], simpler prompting strategies like few-shot learning [26] and Chain of Thought (CoT) [72], which have also been shown effective, are much less costly and thus have been increasingly adopted [20, 84, 88]. [Figure 1: Decoder-only models and few-shot learning on APR. (a) A decoder-only model auto-regressively generating patch tokens for a buggy function; (b) a few-shot APR prompt pairing a crafted buggy/fixed binarySearch example and a historical buggy/fixed example from the same buggy project with the target buggy function.] Figure 1b illustrates how the few-shot learning mechanism is applied for APR. Firstly, the APR task-related examples, such as the buggy and fixed code pair of the function binarySearch, are incorporated into the prompt. Note that the example selection varies among different techniques, e.g., manually crafting examples like binarySearch [76] and choosing examples of historical bug fixes within the same buggy project [76, 79]. Next, the target buggy function to be fixed, e.g., getNumericalMean() in the Math-2 bug [14], is also added to the prompt. At last, the resulting prompt is fed to the model to generate patches for the target buggy function. To summarize, the purpose of the few-shot learning mechanism is to enable the model to learn how to handle specific tasks through the examples. However, while multiple LLM-based APR techniques have already incorporated the few-shot learning mechanism [40, 76, 79], its impacts and characteristics remain unexplored.
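To make the few-shot prompt structure of Figure 1b concrete, the following minimal Java sketch assembles a one-shot APR prompt from a buggy/fixed example pair and the target buggy function; the class name FewShotPromptBuilder and its signature are illustrative assumptions, not the study's artifact.

// A minimal one-shot APR prompt builder in the style of Figure 1b.
// All names here are hypothetical; only the prompt layout follows the paper.
public final class FewShotPromptBuilder {
    public static String build(String exampleBuggy, String exampleFixed, String targetBuggy) {
        StringBuilder prompt = new StringBuilder();
        prompt.append("// Provide a fix for the buggy function\n\n");
        // One buggy/fixed example pair, e.g., the crafted binarySearch pair
        // or a historical bug fix from the same buggy project.
        prompt.append("// Buggy Function\n").append(exampleBuggy).append('\n');
        prompt.append("// Fixed Function\n").append(exampleFixed).append("\n\n");
        // The target buggy function; the LLM completes after 'Fixed Function'.
        prompt.append("// Buggy Function\n").append(targetBuggy).append('\n');
        prompt.append("// Fixed Function\n");
        return prompt.toString();
    }
}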
2.2 Automated Program Repair Automated Program Repair (APR) techniques [33, 39, 40, 46, 56, 74–77, 81], designed to aid developers in fixing bugs by automatically generating patches, typically follow the Generate-and-Validate (G&V) paradigm [54]. In particular, an APR process refers to locating program faults, generating patches for the buggy locations, and validating such patches against a test suite to determine their plausibility (i.e., whether they could pass all tests). Eventually, the resulting plausible patches are manually reviewed to select the correct fix for the target fault. Notably, the trigger tests in the patch-validating test suite are manually created by developers. During the execution of trigger tests, the unit testing framework, e.g., JUnit [5], can be used to provide the corresponding error messages. Among the APR techniques, the learning-based techniques [39, 49, 56, 74, 77, 79, 81] that utilize deep learning have recently achieved remarkable performance. Specifically, many such techniques widely adopt Neural Machine Translation (NMT) [68] techniques, which cast APR as a translation task that transforms buggy code into correct code. They typically leverage the power of the NMT models through training on extensive datasets containing millions of buggy and fixed code pairs. However, such techniques are highly costly when building well-constructed datasets of buggy and patch code pairs [77] and specific context representations for the NMT models [39]. More recently, Large Language Models (LLMs) have become increasingly adopted in various downstream software tasks including APR. In particular, directly applying models like Codex can already outperform all previous APR techniques [76]. Meanwhile, multiple LLM-based APR techniques have been proposed to further enhance the repair performance. For instance, AlphaRepair [77] applies the pre-trained CodeBERT model with the “cloze-style” APR, i.e., removing the buggy code tokens and applying the LLM to generate the correct ones. Similar to AlphaRepair in adopting the cloze-style repair paradigm, Repilot [74] focuses on synthesizing compilable patches, utilizing the Language Server Protocol to prune infeasible tokens and proactively complete tokens as suggested by the LLM. FitRepair [75] combines LLMs with domain-specific fine-tuning and prompting strategies, fully automating the plastic surgery hypothesis, i.e., that the code ingredients to fix bugs usually already exist within the same project. ChatRepair [79] utilizes a conversational repair mechanism based on GPT-3.5 and successfully fixes 162 out of 337 bugs in the Defects4J dataset with the assistance of the rich information from the original bug-exposing tests. Fan et al. [33] conduct a study to investigate whether APR techniques can correct program errors generated by LLMs, particularly in complex tasks like the LeetCode contests. Another study [76] employs the few-shot learning mechanism and recognizes the ineffectiveness of simply feeding LLMs with only buggy functions, as they are not pre-trained for APR. To address this, they create a prompt containing two pairs of buggy and fixed code examples: one manually crafted, and another from the same project or dataset. Then they include the buggy function to be fixed in this prompt, thus activating the function-level APR by providing such a prompt to LLMs. However, their claim that LLMs cannot be directly applied to the function-level APR and the effectiveness of employing the few-shot learning mechanism have not been fully investigated. [Figure 2: Bug examples existing in a single line, hunk, or function. Single-line bug (Math-80): in flipIfWarranted, the single line int j = 4 * n - 1; is replaced by int j = 4 * (n - 1);. Single-hunk bug (Math-91): in compareTo, two contiguous lines computing double nOd and dOn are replaced by long-typed numerator/denominator products. Single-function bug (Math-95): getInitialDomain requires edits to three discontinuous lines, e.g., initializing double ret = 1.0; and guarding the division ret = d / (d - 2.0); with if (d > 2.0).] The existing LLM-based APR techniques mainly focus on repairing single-line or single-hunk bugs [74, 75, 77, 85], as illustrated in Figure 2.
Specifically, the single-line bug Math-80 [15] is contained within a single line of the function flipIfWarranted, where fixing this line alone can resolve the bug. Such a single-line bug can be fixed by APR techniques focusing on the line-level repair, like AlphaRepair [77], given accurate fault locations. Meanwhile, the single-hunk bug Math-91 [16] is contained in a continuous section of code where two contiguous buggy lines are replaced in the fixed version. This kind of bug can be fixed by hunk-level APR techniques like Repilot [74], which require multiple accurate statement-level fault locations. Note that single-line and single-hunk bugs can be considered as part of single-function bugs. On the other hand, the bugs existing in multiple discontinuous sections/lines within a function and requiring simultaneous edits on them for a fix are also referred to as single-function bugs. For instance, fixing the Math-95 bug [17] requires editing three discontinuous lines simultaneously. It can be easily derived that fixing single-function bugs poses a greater challenge for APR techniques, e.g., the state-of-the-art APR techniques like ChatRepair and CodexRepair incur a performance decrease of 33% and 36.4% [79], respectively, in terms of the number of correctly fixed bugs for the function-level APR. As mentioned, single-function bugs actually include single-hunk and single-line bugs as subsets, i.e., enabling a larger scope of repair tasks. Moreover, locating function-level bugs tends to be cheaper than locating line-level or hunk-level bugs [21, 48, 55], thus making the function-level APR more practical. Therefore, we consider developing function-level APR techniques rather promising and worth being sufficiently investigated.", "pre_questions": [], "main_content": "INTRODUCTION Fixing software defects costs developers a significant amount of time and effort [34]. To assist developers in reducing the burden of repairing programs, Automated Program Repair (APR) techniques have been proposed to automatically generate potential patches for buggy programs. Specifically, the learning-based APR techniques, which incorporate the learning power to advance the repair performance, have been increasingly studied in recent years. For instance, many such techniques [30, 39, 49, 56, 81, 82, 89] utilize Neural Machine Translation (NMT) [68] such that APR is modeled as a translation task where the objective is to transform buggy code into correct code. More recently, Large Language Models (LLMs) have become largely adopted in various downstream software tasks [31, 32, 42, 47] including APR [38, 40, 62, 74, 75, 77, 79], where they have been proven to advance the repair performance [33, 38, 75–77], e.g., the Codex model can fix 32 more bugs than previous APR techniques in the Defects4J 1.2 dataset [76]. Meanwhile, researchers also propose multiple LLM-based APR techniques [40, 71, 74, 77–79] to further enhance the repair performance. For instance, the state-of-the-art LLM-based APR technique ChatRepair [79] employs a conversational repair mechanism and successfully fixes 162 out of 337 bugs in a crafted Defects4J dataset, achieving a gain of at least 24.6% compared with all existing techniques. However, many LLM-based APR techniques are proposed for the single-line or hunk-level program repair by auto-completing a single line [45] or infilling a hunk of code with context [77], respectively.
They typically rely on identifying statement-level program faults, i.e., given fault locations [37, 74, 75, 77, 85] or applying statement-level fault localization techniques such as Gzoltar [71]. Nevertheless, it has been widely argued that accurately identifying statement-level faults can be essentially costly, i.e., demanding fine-grained input or strong assumptions [21, 24, 52, 66], thus potentially limiting the real-world applicability of the single-line or hunk-level APR. On the other hand, the LLM-based function-level APR can be potentially more promising, i.e., applying a generative model such as an LLM to auto-regressively generate the entire patched version of the buggy function by prompting the buggy function into the LLM. To illustrate, first, the function-level APR enables a larger scope of the program repair task: it involves not only the single-line and hunk-level repair tasks, but also a more complicated task which repairs multiple discontinuous lines or hunks within a function [65]. Second, identifying function-level faults tends to be more cost-efficient than identifying statement-level faults, thus rendering the function-level APR more practical in the real world [44, 48, 55]. While LLM-based function-level APR techniques are more promising, there is a lack of sufficient study and understanding of them [76], thus potentially hindering the further improved usage of LLMs for APR. Specifically, first, the existing LLM-based APR techniques exhibit significant performance loss for the function-level APR, e.g., incurring a decrease of 33% in ChatRepair and 36.4% in CodexRepair [76, 79] in terms of correctly fixed bugs. Second, the rationale of how such techniques can be affected has not been fully investigated. More specifically, the effectiveness of certain commonly-used mechanisms such as the few-shot learning [40, 76, 79], i.e., prompting buggy code and fixed code pair examples that illustrate the function-level APR task and provide repair context for advancing the learning power of models, remains inadequately validated. Additionally, the potential of incorporating the auxiliary repair-relevant information, such as bug reports and trigger tests, remains underexplored [33, 62, 79]. Thus, there is an urgent need to extensively study the LLM-based function-level APR to further enhance the repair performance. In this paper, we conduct the first comprehensive study on the function-level LLM-based APR, including investigating the effect of the few-shot learning and the auxiliary repair-relevant information. Specifically, we adopt six widely-studied LLMs, including the state-of-the-art LLMs such as Codex-edit and GPT-3.5-Turbo [33, 79, 85, 86], as the study subjects. We also construct a benchmark containing 522 single-function bugs, i.e., the bugs existing within one single function, in the Defects4J dataset. Typically, we build our repair prompt containing the buggy function along with (1) the buggy code and fixed code pair examples to utilize the few-shot learning mechanism; and (2) the auxiliary repair-relevant information such as trigger tests, for the studied LLMs respectively. In this way, the entire patched functions can be auto-generated and then validated with the Defects4J test suite to derive the plausible patches (which pass all the tests).
Our evaluation results demonstrate that incorporating the few-shot learning mechanism in the function-level APR actually causes significantly disparate and even negative impacts on the average number of plausible fixes compared with applying the default LLMs only, i.e., from an increase of 10% to a decrease of 49.7% among all studied LLMs. Surprisingly, we find that directly applying trigger tests or error messages for prompting can significantly enhance the repair performance, e.g., 26.7% and 26.1% improvement in terms of the average number of plausible fixes respectively. On the other hand, while statement-level fault location information has been shown to be essential to many APR techniques, only adopting the easier-to-obtain auxiliary repair-relevant information, including trigger tests, error messages, and comments altogether, for prompting can achieve rather close performance, i.e., a gap of merely 7.1% in terms of the number of plausible fixes. Such a result indicates the potential of replacing the costly statement-level fault location information for the LLM-based function-level APR. In our study, over 10 million patches are generated and validated, consuming more than 8,000 GPU and 100,000 CPU hours. To our best knowledge, this is the largest empirical study of LLM-based APR conducted to date. Inspired by our findings, we propose an LLM-based function-level APR technique, namely SRepair, which adopts a dual-LLM framework to leverage the power of the auxiliary repair-relevant information for advancing the repair performance. In particular, SRepair first adopts a repair suggestion model which employs the Chain of Thought (CoT) technique [72] to generate natural-language repair suggestions. More specifically, SRepair prompts the LLM with the buggy function and the auxiliary repair-relevant information (i.e., trigger tests, error messages, and comments) to identify the root causes of the bugs and generate repair suggestions in natural language accordingly. SRepair then adopts a patch generation model to auto-generate a patched function with the assistance of the repair suggestions. Our evaluation demonstrates that SRepair can correctly fix a total of 300 single-function bugs in our Defects4J dataset, largely surpassing all previous APR techniques, e.g., 1.59\u00d7 more than Repilot [74] and 85% more than ChatRepair [79], without the need for the costly statement-level fault location information. Moreover, 128 of these bugs were not fixed by any of the baseline LLM-based APR techniques adopted in this paper. Surprisingly, SRepair is also capable of repairing 32 multi-function bugs, i.e., bugs existing across multiple functions at the same time, which, to our best knowledge, is the first time this has been achieved by any APR technique. To summarize, this paper makes the following contributions: \u2022 We perform the first ever extensive study on the LLM-based function-level APR with the impact factors on its performance, paving the way for new directions in future research. \u2022 We find that LLMs with zero-shot learning are already powerful function-level APR techniques. We also find that applying auxiliary repair-relevant information can substantially improve the repair performance for all studied LLMs.
\u2022 We propose a new LLM-based function-level APR technique, SRepair, which can achieve remarkable repair performance by correctly fixing 300 single-function bugs, largely surpassing the SOTA techniques, i.e., outperforming ChatRepair [79] by 85% and Repilot [74] by 1.59\u00d7 in the Defects4J dataset. Moreover, SRepair successfully fixes 32 multi-function bugs, which, to our best knowledge, is the first time this has been achieved by any APR technique. 3.1.1 Study Subjects. We utilize six distinct LLMs as our study subjects, encompassing the widely used state-of-the-art Codex-edit and GPT-3.5-Turbo [33, 76, 79, 86], along with four advanced open-source code LLMs including CodeLlama 7B, 13B, and 34B (over 500k downloads on Hugging Face within one month [6]), and Magicoder 7B (over 1000 stars on GitHub [7]). Specifically, we adopt code-davinci-edit-001 [8], gpt-3.5-turbo-1106 [18], and the CodeLlama-Instruct series [6] models as the versions of Codex-edit, GPT-3.5-Turbo, and CodeLlama respectively, since they have undergone instruction fine-tuning [63] and can better follow the APR prompt instruction. We also employ the MagicoderS-CL [7] model as the version of Magicoder. Due to the page limit, the LLM configuration details are presented on our GitHub page [1]. 3.1.2 Dataset. We construct our benchmark using both versions 1.2 and 2.0 of the Defects4J dataset [41], which is the most widely used APR dataset [39, 57, 76] with a collection of a total of 835 real-world bugs extracted from open-source Java projects, comprising both buggy and fixed versions of the source code. Notably, we bound our study scope within 522 single-function bugs, including 276 single-hunk (“SH”) bugs and 158 single-line (“SL”) bugs, as shown in Table 1. It should be noted that our collected single-function bugs already include all the single-line and single-hunk bugs studied in previous works [74–77, 79]. 3.1.3 Evaluation Metrics. To assess the repair performance, we follow the standard practice [35, 43, 60] to utilize the plausible patches that pass all test cases as our major evaluation metric. Table 1: Statistics of the Dataset (Project: # Bugs / SH Bugs / SL Bugs). Defects4J 1.2: Chart 16/12/9; Closure 105/59/26; Lang 42/23/13; Math 74/35/23; Mockito 24/12/7; Time 16/6/3. Defects4J 2.0: Cli 28/13/6; Codec 11/9/8; Collections 1/1/1; Compress 36/16/5; Csv 12/7/4; Gson 9/5/4; JacksonCore 13/9/5; JacksonDatabind 67/26/15; JacksonXml 5/1/1; Jsoup 53/38/27; JxPath 10/4/1. Overall: 522/276/158. In particular, those test cases include the trigger tests in the Defects4J dataset designed to expose bugs and relevant tests which can load the classes associated with the buggy functions. 3.2 Research Questions We investigate the following research questions for extensively studying the function-level APR along with the factors which can impact its effectiveness. \u2022 RQ1: How does the LLM-based function-level APR perform under the zero-shot and few-shot learning setups? For this RQ, we attempt to investigate the performance of the LLM-based function-level APR under the default zero-shot learning and explore the performance impact from adopting few-shot learning. \u2022 RQ2: How does different auxiliary repair-relevant information affect the LLM-based function-level repair performance? For this RQ, we attempt to study the impact from different auxiliary repair-relevant information, including bug reports, trigger tests, etc., on the function-level repair performance.
3.3 Implementation We obtain the open-source models from Hugging Face [2] and access Codex-edit and GPT-3.5-Turbo through the API [3] provided by OpenAI. Our default setting for patch generation uses nucleus sampling with top-p = 0.9, temperature = 0.8, and 200 samples per bug, following prior works [28, 62, 76]. Patches are generated on servers with 128-core 2.6GHz AMD EPYC™ Rome 7H12 CPUs, 512 GiB RAM, and eight NVIDIA A100 80GB GPUs, running Ubuntu 20.04.6 LTS. 3.3.1 APR input prompt setup. Following prior studies [62, 76], we set the prompt for the LLM utilized in the APR task as illustrated in Figure 3 to enable the function-level APR. Specifically, we begin with a description of the APR task as ‘Provide a fix for the buggy function’. Next, we incorporate the buggy code and fixed code pair examples from the few-shot learning mechanism or the auxiliary repair-relevant information into the prompt. Subsequently, we use ‘Buggy Function’ in conjunction with the buggy code, e.g., the Math-2 buggy function getNumericalMean [14] in Figure 3, to prompt LLMs with the buggy function to be fixed. Finally, we apply ‘Fixed Function’ to guide the LLM in generating a patched function. Notably, we employ the zero-shot learning approach as the default baseline in our study, i.e., adopting no auxiliary repair-relevant information or buggy-fixed code pair examples for prompting. [Figure 3: The input prompt for the function-level APR of the Math-2 bug: ‘// Provide a fix for the buggy function’, followed by {buggy code and fixed code pair examples} or {auxiliary repair-relevant information}, then ‘// Buggy Function’ with public double getNumericalMean() { return (double) (getSampleSize() * getNumberOfSuccesses()) / (double) getPopulationSize(); }, and finally ‘// Fixed Function’.] 3.3.2 K-shot learning setups. Table 2 presents our k-shot learning setups. Table 2: K-shot learning settings and abbreviations (K / Example Type / Abbreviation): 0 / N.A. / K0(Basic); 1 / Crafted Example / K1(CE); 1 / Project Example / K1(PE); 2 / Crafted Example & Project Example / K2(CE, PE); 2 / Project Example & Project Example / K2(PE, PE). Specifically, we set the zero-shot learning approach, i.e., adopting no pairs of buggy code and fixed code examples (K=0), as our basic setup, denoted as K0(Basic). Moreover, we follow prior work [76] to form our buggy and fixed code pair examples via manually crafted examples (CE), i.e., binarySearch in Figure 1b, and chosen historical bug fix examples within the same buggy project (PE). We thus form our k-shot learning setup variants as K1(PE) with only one chosen historical bug fix example from the same project, K1(CE) with only one manually crafted example, and K2(CE, PE) with one manually crafted example and one chosen historical bug fix example from the same project. We also form K2(PE, PE) with two chosen historical bug fix examples from the same project, following the implementation of the prior work for selecting multiple examples [19]. Note that while more setup variants are possible, e.g., with more manually crafted examples and chosen historical bug fix examples from the same project, we generally follow the setup of prior work [76] for fair performance comparison and cost-efficient evaluations.
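To tie the prompt setup and sampling settings together, here is a hedged sketch of the generate-and-validate loop under the defaults above (top-p = 0.9, temperature = 0.8, 200 samples per bug); sampleLLM and writePatchedFunction are hypothetical stand-ins, and the success check assumes the 'Failing tests: 0' summary that the real defects4j test command prints.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public final class GenerateAndValidate {
    static final int SAMPLES_PER_BUG = 200; // default sample budget per bug

    public static List<String> repair(String prompt, Path checkout) throws Exception {
        List<String> plausible = new ArrayList<>();
        for (int i = 0; i < SAMPLES_PER_BUG; i++) {
            // Hypothetical LLM call with nucleus sampling (top-p = 0.9, temperature = 0.8).
            String patchedFunction = sampleLLM(prompt, 0.9, 0.8);
            writePatchedFunction(checkout, patchedFunction); // splice into the buggy file
            // Validate against the project's Defects4J test suite.
            Process p = new ProcessBuilder("defects4j", "test")
                    .directory(checkout.toFile()).redirectErrorStream(true).start();
            boolean passed = false;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("Failing tests: 0")) passed = true; // assumed output format
                }
            }
            p.waitFor();
            if (passed) plausible.add(patchedFunction); // plausible patch: passes all tests
        }
        return plausible;
    }

    // Hypothetical helpers, not part of the study's released artifact.
    static String sampleLLM(String prompt, double topP, double temperature) { throw new UnsupportedOperationException(); }
    static void writePatchedFunction(Path checkout, String patchedFunction) { throw new UnsupportedOperationException(); }
}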
3.3.3 Collecting auxiliary repair-relevant information. In this study, we refer to the auxiliary repair-relevant information as the bug report and project-specific information from the target buggy project, following prior works [46, 64, 79, 81, 83], as shown for the Math-2 bug in Figure 4. Specifically, the bug reports are collected from the official issue links of the Defects4J [4] repository. More specifically, following prior works [27, 87], we divide a bug report into two parts, as illustrated in Figure 4a. [Figure 4: The bug report and project-specific information in the Math-2 bug. (a) Bug report: the issue title ‘Issue 1021: HypergeometricDistribution.sample suffers from integer overflow’ and the issue description (‘Hi, I have an application which broke when ported from commons math 2.2 to 3.2. It looks like the HypergeometricDistribution.sample() method doesn't work as well as it used to with large integer values; the example code below should return a sample between 0 and 50, but usually returns -50…’). (b) Project-specific information: the trigger test testMath1021 asserting 0 <= sample <= n, the error message ‘AssertionFailedError: sample=-50 at HypergeometricDistributionTest.java:297’, and the buggy function comment (‘For population size {@code N}, number of successes {@code m}, and sample size {@code n}, the mean is {@code n*m/N}.’).] One part is the issue title, averaging around 12 tokens, which summarizes the type and the cause of the bug (e.g., “Issue 1021: ... suffers from integer overflow”). The other is the issue description, averaging 234 tokens, which provides detailed conditions, error messages, reproduction steps, etc. For instance, the issue description in the Math-2 bug report provides a detailed description of the buggy method HypergeometricDistribution.sample() with the trigger conditions, i.e., “with large integer values”. Furthermore, we automatically extract the project-specific information from the buggy project, as for the Math-2 bug in Figure 4b, following prior works [62, 79, 81]. Specifically, we first build all the buggy projects and automatically extract all the trigger tests and buggy function comments. Then, for each bug, we execute the trigger tests and capture the error messages generated by the unit test framework, such as JUnit [5]. Notably, among all 522 single-function bugs in the Defects4J dataset, only 10 lack bug reports and 2 lack comments. We then leave such auxiliary repair-relevant information empty in our study. For evaluating the impact of the auxiliary repair-relevant information, we form eight different setups. Specifically, for the bug report-relevant information, we form three setups: BR(IT) with the issue title only, BR(ID) with the issue description only, and BR(ALL) with the whole bug report. For the project-specific information, we form four setups: PI(TT) with the trigger test only, PI(EM) with the error message only, PI(BC) with the buggy comment only, and PI(ALL) with all such information.
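As a concrete illustration of the error-message extraction step above, the following minimal JUnit 4 sketch runs a trigger test class and collects the failure messages produced by the framework; the class name TriggerTestRunner is an illustrative assumption, while the JUnitCore, Result, and Failure APIs are real JUnit 4 types.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public final class TriggerTestRunner {
    // Runs a trigger test class and returns the error messages emitted by
    // the unit testing framework, e.g., "AssertionFailedError: sample=-50".
    public static String errorMessages(Class<?> triggerTestClass) {
        Result result = JUnitCore.runClasses(triggerTestClass);
        StringBuilder sb = new StringBuilder();
        for (Failure f : result.getFailures()) {
            sb.append(f.getTestHeader()).append(": ").append(f.getMessage()).append('\n');
        }
        return sb.toString();
    }
}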
3.4 Result Analysis 3.4.1 RQ1: the function-level repair performance. Table 3 presents the function-level APR results in terms of the number of plausible fixes. In general, we observe that K0(Basic) achieves the overall optimal plausible fix results, i.e., 180 average plausible fixes out of our collected 522 single-function bugs, outperforming all the rest of the setups by at least 10.4%. Such a result indicates that LLMs themselves (with zero-shot learning) are already powerful function-level APR techniques. Table 3: APR results under different few-shot learning setups (Settings: Codex-edit / GPT-3.5-Turbo / CodeLlama 7B / CodeLlama 13B / CodeLlama 34B / Magicoder / Average Plausible Fixes). K0(Basic): 174/175/192/179/160/199/180; K1(CE): 103/138/180/185/176/112/149; K1(PE): 109/174/194/193/153/157/163; K2(CE, PE): 138/166/175/189/125/100/149; K2(PE, PE): 165/187/167/189/128/121/160. Finding 1: LLMs with zero-shot learning are already powerful function-level APR techniques. Interestingly, we can further observe that applying the few-shot learning mechanism leads to quite disparate plausible fix results across LLMs. For instance, compared with K0(Basic), while CodeLlama 34B shows a 10% (176 vs. 160) improvement in K1(CE), Magicoder shows a 49.7% (100 vs. 199) decline in K2(CE, PE) in terms of the number of plausible fixes. Finding 2: Applying the few-shot learning mechanism in the function-level APR leads to disparate plausible fix results across LLMs. Furthermore, we present the distribution of plausible, test-failure, and uncompilable patches, given the identical total number of generated patches across different LLMs under all setups, in Figure 5. [Figure 5: Patch status averaged across all models under different few-shot learning setups: plausible, test-failure, and uncompilable patch ratios (%) for K0(Basic), K1(CE), K1(PE), K2(CE, PE), and K2(PE, PE).] Note that the test-failure patches can be successfully compiled but fail one or more tests in our Defects4J dataset. Interestingly, we can find that K0(Basic) achieves the best plausible patch rate of 4.3% and the lowest uncompilable patch rate of 30.2% among all the k-shot setups, while applying the few-shot learning generates more uncompilable patches, i.e., ranging from 38.4% to 59.6% more than K0(Basic). Finding 3: Applying the few-shot learning mechanism may generate more uncompilable patches than the zero-shot learning mechanism. 3.4.2 RQ2: performance impact from the auxiliary repair-relevant information. Noticing that applying zero-shot learning achieves the optimal repair performance among all the k-shot learning setups as mentioned, we also adopt zero-shot learning in our auxiliary repair-relevant information evaluations, with K0(Basic) as a baseline for a fair performance comparison. Table 4 presents the APR results under different auxiliary repair-relevant information setups. [Figure 6: The Venn diagram of plausible fixes over different setups (K0(Basic), BR(ALL), BR(IT), BR(ID)) and the bug Closure-66, which can only be fixed in K0(Basic); its fix adds an else branch to visit that sets typeable = false when the NodeUtil.isObjectLitKey(…) check fails, since object literal keys are not typeable.] We observe that using bug report-relevant setups significantly enhances the repair performance of all models, i.e., the number of average plausible fixes increases from 180 in K0(Basic) to 238 in BR(IT), 270 in BR(ID), and 273 in BR(ALL). On the other hand, while BR(ALL) achieves the optimal result, Figure 6a also shows that it misses fixing 19 bugs which can be fixed in K0(Basic). More specifically, we can observe that five bugs can only be fixed in K0(Basic) rather than in any of the rest of the setups, such as Closure-66 [11] in Figure 6b. We find that to fix such a bug, a simple branch condition must be added to set typeable to false. In K0(Basic), four models successfully fix this bug.
However, in BR(IT), BR(ID), and BR(ALL), the focus is incorrectly placed on the issue described in the bug report, leading to inappropriate code logic changes. Consequently, none of the six models are able to fix the bug. Finding 4: While applying the bug report-relevant information significantly enhances the function-level repair performance, it still misses fixing certain bugs which can be fixed by the baseline technique. We also attempt to investigate the performance impact from the project-specific information on the LLM-based function-level APR. Table 4 shows that using project-specific information setups leads to an increase for all models, i.e., the average number of plausible fixes rises from 180 in K0(Basic) to 185 in PI(BC), 227 in PI(EM), and 228 in PI(TT). Notably, PI(ALL) achieves an optimal average of 254 plausible fixes, indicating the potential of leveraging as much auxiliary repair-relevant information as possible for enhancing the function-level repair performance. Interestingly, unlike adopting the bug report-relevant information, all the bugs plausibly fixed in K0(Basic) can also be fixed by adopting the project-specific information. Table 4: APR results in different auxiliary repair-relevant information settings (Settings: Codex-edit / GPT-3.5-Turbo / CodeLlama 7B / CodeLlama 13B / CodeLlama 34B / Magicoder / Average Plausible Fixes). K0(Basic): 174/175/192/179/160/199/180. Bug report information: BR(IT) 265/233/234/221/221/251/238; BR(ID) 281/286/261/264/248/279/270; BR(ALL) 301/285/275/260/255/260/273. Project-specific information: PI(BC) 186/185/187/191/169/194/185; PI(EM) 217/226/239/225/217/240/227; PI(TT) 239/247/227/221/201/235/228; PI(ALL) 264/273/249/247/236/254/254. Finding 5: Directly adopting trigger tests, error messages, and comments from buggy projects can also effectively advance the function-level repair performance. We further evaluate the performance impact and necessity of the statement-level fault location information in the function-level APR. We utilize the ground-truth statement-level fault location information following previous work [62] by labeling the corresponding buggy line with /*bug is here*/; a minimal sketch of this labeling step is shown after Table 5 below. Specifically, the ground-truth fault locations are provided by the official Defects4J GitHub repository [12]. To investigate the impact of the statement-level fault location information on the function-level APR, we calculate the average number of plausible fixes generated by various models across different auxiliary repair-relevant information setups. Table 5: APR results with fault location information (Settings: w/o FL / w/ FL / Improvement, where FL refers to fault location information). K0(Basic): 180/217/20.6%; BR(IT): 238/262/10.1%; BR(ID): 270/289/7.0%; BR(ALL): 273/291/6.6%; PI(BC): 185/217/17.3%; PI(EM): 227/257/13.2%; PI(TT): 228/246/7.9%; PI(ALL): 254/272/7.1%.
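The labeling step referenced above is mechanical enough to sketch; the helper below is a hedged illustration (not the authors' script) that prefixes each ground-truth buggy line with the /*bug is here*/ marker.

import java.util.List;
import java.util.Set;

public final class FaultLocationLabeler {
    // Prefixes each ground-truth buggy line (1-indexed) with the marker used
    // in the study's fault-location setups.
    public static String label(List<String> sourceLines, Set<Integer> buggyLineNumbers) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (buggyLineNumbers.contains(i + 1)) sb.append("/* bug is here */ ");
            sb.append(sourceLines.get(i)).append('\n');
        }
        return sb.toString();
    }
}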
From Table 5, we can observe that while applying the statement-level fault location information enhances the repair performance, the extent of this improvement can be potentially compromised as the token count of the auxiliary repair-relevant information increases. For instance, while adding the fault location information to K0(Basic) achieves a performance improvement of 20.6%, such an improvement shrinks to 6.6% for BR(ALL) (averaging 246 tokens) and to 7.1% for PI(ALL) (averaging 396 tokens). Moreover, we find that 14 bugs that are originally fixable without fault location information cannot be plausibly fixed across all setups and models when using fault location information. [Figure 7: Statement-level fault location information misleads the LLM, preventing the repair of the Closure-112 bug: in inferTemplatedTypesForCall, the /* bug is here */ markers sit on the Map inferred = inferTemplateTypesFromParameters(fnType, n); statement, whereas the correct fix wraps the inferred keys with Maps.filterKeys(…).] For instance, in the Closure-112 bug [10] shown in Figure 7, which demands multiple edits, a correct fix is achieved if the model reads the entire method, thus comprehending the necessity of adding Maps.filterKeys to check whether each key (of the TemplateType type) exists in the key collection. However, with the fault location information, the attention of the model becomes disturbed, consequently over-focusing on the Map inferred code block and making extensive but ineffective modifications. Finding 6: The statement-level fault location information effectively enhances the repair performance. As the token count of the auxiliary repair-relevant information increases, the extent of the improvement can be potentially compromised. 4 DISCUSSION 4.1 Bug report While the bug reports associated with carefully evaluated projects like Defects4J are generally of high quality, and their effectiveness is shown in our evaluation results, they nonetheless include instances of inaccuracies [46]. Specifically, real-world bug reporting is filled with a significant volume of reports that are invalid, irreproducible, incomplete, or outright misleading [22, 25, 51]. Moreover, the process of generating bug reports is manual and labor-intensive, in contrast to the APR techniques seeking to rectify software bugs autonomously, eliminating the need for human intervention [36]. Consequently, relying on bug reports for providing auxiliary repair-relevant information to advance the function-level APR may be inappropriate and impractical, especially when dealing with unknown faults. On the contrary, trigger tests [53, 80, 81] precisely identify the root cause of faults. Error messages [59, 81] can be automatically obtained from test outputs and reveal the fault-triggering boundary conditions. Comments provide function descriptions added by developers [62]. These sources of information are more precise and cost-efficient compared to bug reports. Therefore, we recommend the utilization of project-specific information in LLM-based APR techniques to further improve repair performance.
[Figure 8: The SRepair framework. The repair suggestion model (GPT-3.5-Turbo) takes the comment, buggy code, trigger test, and error message, applies Chain of Thought reasoning, and produces multiple natural-language repair suggestions; the patch generation model (Magicoder) then receives a prompt of the form ‘// Provide a fix … {Repair Suggestion} // Buggy Function {Buggy Code} // Fixed Function’ and generates candidate fixed functions.] 4.2 Models for APR Although the CodeLlama models have gained a number of plausible fixes in our study, we do observe abnormal behaviors of CodeLlama-based models in the patch generation of the function-level APR. When applied to the Java-based Defects4J dataset, CodeLlama models frequently generate patches with ‘[PYTHON]’ tags and Python code, e.g., producing 188,113 such patches in the CodeLlama 34B model. This issue was not prevalent in other models. Hence, we advocate using the high-performing GPT-3.5-Turbo and the open-source Magicoder models, both of which have shown superior capabilities in the APR task. 5 APPROACH By far, we have demonstrated the power of adopting the auxiliary repair-relevant information in the function-level LLM-based APR, i.e., including such information in the repair prompt along with the buggy function under zero-shot learning. In this section, to further leverage the potential of the auxiliary repair-relevant information, we construct a novel function-level APR technique, namely SRepair (referring to Suggestion Repair), which adopts a dual-LLM framework for advancing the repair performance. 5.1 SRepair Framework Our Dual-LLM framework is shown in Figure 8, where SRepair first adopts a repair suggestion model which utilizes the learning power of the LLM by comprehensively analyzing the auxiliary repair-relevant information via the Chain of Thought (CoT) technique [72]. It then provides repair suggestions in natural language. Next, SRepair adopts a patch generation model which exhibits its code generation capabilities by generating the entire patched function following the repair suggestions. More specifically, we enable the CoT technique by prompting the LLM to first analyze the buggy function and project-specific information, then identify the root cause of the bug, and finally generate repair suggestions in natural language. For instance, as shown in Figure 9, the repair suggestion model first identifies the root cause of the Cli-26 bug [9]: the OptionBuilder properties ‘are not being reset after creating an Option’, and then generates the correct repair suggestion, ‘use a try-finally block’. Finally, such a suggestion is fed to the patch generation model for generating the patched functions (a minimal sketch of this flow is given below). 5.2 Evaluation 5.2.1 Dataset. We use the widely studied repair benchmarks of Defects4J [41] and QuixBugs [50].
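Before the dataset details, the Dual-LLM flow of Figure 8 can be made concrete with a short, hedged sketch; suggestLLM and patchLLM are hypothetical stand-ins for the GPT-3.5-Turbo and Magicoder calls, and the five-patches-per-suggestion ratio follows the implementation described in Section 5.2.2.

import java.util.ArrayList;
import java.util.List;

public final class SRepairPipeline {
    public static List<String> repair(String buggyFunction, String comment,
                                      String triggerTest, String errorMessage) {
        // Stage 1: CoT prompt to the repair suggestion model.
        String cotPrompt = "Analyze the buggy code, trigger test and error message. "
                + "Then analyze the root cause. Finally, try to provide repair suggestions.\n"
                + comment + "\n" + buggyFunction + "\n" + triggerTest + "\n" + errorMessage;
        List<String> patches = new ArrayList<>();
        // Stage 2: each natural-language suggestion guides patch generation.
        for (String suggestion : suggestLLM(cotPrompt)) {
            String patchPrompt = "// Provide a fix for the buggy function\n" + suggestion
                    + "\n// Buggy Function\n" + buggyFunction + "\n// Fixed Function\n";
            patches.addAll(patchLLM(patchPrompt, 5)); // 5 patched functions per suggestion
        }
        return patches;
    }

    // Hypothetical model calls, not part of the study's released artifact.
    static List<String> suggestLLM(String prompt) { throw new UnsupportedOperationException(); }
    static List<String> patchLLM(String prompt, int n) { throw new UnsupportedOperationException(); }
}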
Specifically, to extensively leverage SRepair’s ability in the function-level APR, we include all function-level bugs from Defects4J 1.2 and 2.0, thereby forming a dataset that comprises 522 single-function (SF) bugs and an additional 143 multi-function (MF) bugs, i.e., the bugs existing in multiple functions and requiring simultaneous edits on them for a fix, as shown in Table 6. Table 6: Statistics of the SRepair Dataset (Project: # Bugs / SF Bugs / MF Bugs). Defects4J 1.2: Chart 25/16/9; Closure 140/105/35; Lang 56/42/14; Math 102/74/28; Mockito 30/24/6; Time 22/16/6. Defects4J 2.0: Cli 30/28/2; Codec 13/11/2; Collections 2/1/1; Compress 40/36/4; Csv 13/12/1; Gson 12/9/3; JacksonCore 18/13/5; JacksonDatabind 85/67/18; JacksonXml 5/5/0; Jsoup 58/53/5; JxPath 14/10/4. Overall: 665/522/143. Additionally, we also evaluate on the QuixBugs dataset, which is made up of 40 function-level buggy and fixed versions of classic programming problems in both Python and Java. [Figure 9: Chain of Thought example of the Cli-26 bug. CoT prompt input: ‘Analyze the buggy code, trigger test and error message. Then analyze the root cause. Finally, try to provide repair suggestions’, together with the comment, buggy code, trigger test, and error message; the model identifies the root cause (‘The OptionBuilder properties are not being reset after creating an Option’) and suggests ‘Use a try-finally block to ensure OptionBuilder properties are reset’.] Table 7: Single-function APR results of SRepair (per project: plausible fixes as GPT-3.5-Turbo PI(ALL) / Magicoder PI(ALL) / SRepair2M / SRepair2M+FL / SRepair200 / SRepair500, then correct fixes as AlphaRepair / Repilot / FitRepair / ChatRepair / SRepair500). Defects4J 1.2: Chart 12/11/14/14/14/14, 9/6/8/15/13; Closure 40/30/39/49/48/56, 23/22/29/37/47; Lang 19/25/27/29/29/32, 13/15/19/21/26; Math 48/43/50/48/47/55, 21/21/24/32/42; Mockito 8/8/12/9/12/12, 5/0/6/6/11; Time 7/5/5/6/6/7, 3/2/3/3/7. Defects4J 2.0: Cli 16/13/16/17/17/19, 5/6/6/5/18; Codec 8/5/8/8/8/11, 6/6/5/8/11; Collections 0/1/0/1/1/1, 0/1/1/0/1; Compress 21/22/21/24/26/28, 1/3/2/2/21; Csv 10/9/10/9/10/11, 1/3/2/3/11; Gson 6/8/7/7/7/9, 2/1/1/3/8; JacksonCore 9/6/9/7/9/10, 3/3/3/3/10; JacksonDatabind 30/28/39/38/39/45, 8/8/10/9/33; JacksonXml 3/1/1/3/1/3, 0/0/0/1/2; Jsoup 34/35/33/35/35/39, 9/18/13/14/35; JxPath 2/4/4/5/4/5, 1/1/1/0/4. D4J 1.2 total: 134/122/147/155/156/176, 74/66/89/114/146; D4J 2.0 total: 139/132/148/154/157/181, 36/50/44/48/154; Overall: 273/254/295/309/313/357, 110/116/133/162/300. 5.2.2 Implementation. In the SRepair implementation, GPT-3.5-Turbo acts as the repair suggestion model due to its superior analytical, coding, and natural language generation abilities, especially in PI(ALL). Magicoder is adopted as the patch generation model due to its cost-effectiveness and competent code generation ability. Notably, for each repair suggestion, SRepair generates 5 patched functions via the patch generation model. We set the sample size of SRepair to 200 (denoted as SRepair200) for comparing with previous APR results in our study section and to 500 (denoted as SRepair500) for fair comparisons with previous APR techniques [56, 74, 75, 77]. Similar to prior work [39, 49, 77], we additionally add an end-to-end time limit of 5 hours to fix one bug. Moreover, to better repair the Python bugs in QuixBugs, we replace the Java comment symbol ‘//’ in the input APR prompt with the Python comment symbol ‘#’.
It should be noted that SRepair does not require statement-level fault location information. For the rest of the setups, we follow our study section. Due to the page limit, we show the experimental results under different configurations and costs of SRepair on our GitHub page [1]. 5.2.3 Evaluation Metrics. Following our study section, we utilize plausible patches to reflect the repair performance. Furthermore, following standard practices in the APR research, we manually inspect each plausible patch for semantic equivalency [54, 56, 58, 76, 77] to determine the correct patches. Due to the intensive manual efforts involved in patch inspection, we conduct a cross-validation with three authors in order to filter out the correct patches generated by SRepair500. 5.2.4 Compared Techniques. We adopt four recent SOTA LLM-based APR techniques: AlphaRepair [77], Repilot [74], FitRepair [75], and ChatRepair [79]. We also adopt GPT-3.5-Turbo PI(ALL) and Magicoder PI(ALL) as baselines with the same auxiliary repair-relevant information and models used in SRepair for studying the effectiveness of our Dual-LLM CoT framework. We also form two SRepair200 variants: SRepair2M with Dual-LLM only, i.e., directly generating repair suggestions without CoT, and SRepair2M+FL with additional statement-level fault location information for comparison. Table 8: Correct fixes on the QuixBugs datasets (QuixBugs: SRepair500 / SRepair200 / ChatRepair / AlphaRepair). Python: 40/40/40/27; Java: 40/40/40/28. [Figure 10: Bug fixes Venn diagram of SRepair500 with the studied baselines (Repilot, FitRepair, ChatRepair, AlphaRepair): (a) on the single-function dataset, 128 bugs are fixed only by SRepair500; (b) on the baselines' studied dataset, 35 bugs are fixed only by SRepair500.] 5.2.5 Result analysis. Table 7 presents the APR results for single-function bugs in the Defects4J dataset. Surprisingly, we find that SRepair500 outperforms all previous LLM-based APR techniques by at least 85%. Specifically, we can observe that 68.4% of single-function bugs (357) in Defects4J can be plausibly fixed, and even 57.5% of bugs (300) can be correctly fixed by SRepair500. Such surprising results indicate that SRepair is capable of fixing a significant number of real-world complicated bugs in the function-level APR. Notably, repairing 300 single-function bugs with SRepair costs only $8.6, averaging $0.029 per correct fix, demonstrating its efficiency as an LLM-based APR technique. Moreover, as shown in Figure 10a, SRepair500 correctly fixes 128 out of 522 single-function bugs which cannot be fixed by any of the baseline LLM-based APR techniques adopted in this paper. Interestingly, Figure 10b shows that SRepair500 also significantly outperforms the state-of-the-art APR baselines, correctly fixing 35 unique bugs that all other baselines failed to fix in their studied bugs. Such a result indicates that SRepair not only expands the repair task scope to the more practical function-level APR but also achieves remarkable repair performance without the need for statement-level fault location information. Table 8 shows that SRepair500 successfully fixes all bugs in the QuixBugs dataset, indicating its superior capability across diverse programming languages.
[Figure 11: The APR results of the multi-function bugs in the Defects4J dataset. Plausible / correct fixes: GPT-3.5-Turbo PI(ALL) 17/11, Magicoder PI(ALL) 23/15, SRepair200 42/25, SRepair500 53/32.] We also evaluate how SRepair repairs complicated multi-function bugs, as shown in Figure 11, where we find that SRepair500 (53 plausible fixes and 32 correct fixes) and SRepair200 (42 plausible fixes and 25 correct fixes) both largely outperform GPT-3.5-Turbo PI(ALL) and Magicoder PI(ALL). Interestingly, as shown in Figure 12, Functions 1 and 2 require information from successfully running Function 3 to determine whether they should execute subsequent statements. This poses a significant challenge for APR techniques, as they need to simultaneously alter the return type of Function 3 to boolean and adapt the function calls in Functions 1 and 2. SRepair successfully identifies such a complex function call and generates the correct fix, indicating the power of SRepair on complicated multi-function faults, which, to our best knowledge, is the first time this has been achieved by any APR technique. [Figure 12: The multi-function bug JacksonDatabind-69 [13]. The fix changes the return type of verifyNonDup (Function 3) from void to boolean, returning false when !explicit and true otherwise, and wraps the subsequent statements in its callers addDelegatingCreator (Function 1) and addPropertyCreator (Function 2) in if (verifyNonDup(…)) { … } guards.] We further find that SRepair2M outperforms both GPT-3.5-Turbo PI(ALL) and Magicoder PI(ALL) by 8.1% and 16.1% in terms of the number of plausible fixes respectively. Furthermore, leveraging the CoT technique achieves an even better result (313 plausible fixes) than incorporating statement-level fault localization information (309 plausible fixes). Such results indicate the effectiveness of our Dual-LLM framework and CoT mechanism in SRepair. 6 THREATS TO VALIDITY Threats to internal validity. One potential threat arises from our manual validation process, which differentiates between plausible patches and those that are semantically correct. To address this concern, three authors cross-validated the plausible patches of SRepair500 by comparing them to those created by developers (the plausible patches generated by other techniques are mostly subsets). Another threat is the potential for data leakage if the developer patches were included in the original training data. To address this, we examined all the patches generated in our study and by SRepair500 in the Defects4J dataset. Among the total plausible patches produced in our study, only 7.4‰ are identical to the developer patches. Similarly, for the plausible patches generated by SRepair500, only 1.5‰ match the developer patches. Such overlapped patches pose almost no impact on our experimental results. An additional threat lies in the trigger tests adopted in SRepair, where the LLMs might have recognized the trigger tests and manipulated them to pass all tests, creating seemingly plausible patches.
Our SRepair\u2019s Dual-LLM mechanism effectively mitigates this threat, as the repair suggestion model only outputs bug-fix suggestions rather than trigger test contents, keeping the patch generation model isolated from such data. Threats to external validity. The main threat to external validity lies in the evaluation datasets used, which may limit the generalizability of our experimental results. To mitigate this, we evaluate our approach on both the popular Defects4J 1.2 and 2.0 datasets, where we include all their single-function bugs in our study. Furthermore, we extend our investigation to multi-function bugs in our SRepair evaluation. We also evaluate SRepair on the QuixBugs datasets, which contain both Java and Python bugs, to validate its generalizability. Threats to construct validity. The threat to construct validity mainly lies in the metrics used. To mitigate this, we adopt the widely-used plausible patches along with their distributions. We also use correct fixes to evaluate our approach SRepair. 7 CONCLUSION In this paper, we conduct the first comprehensive study on the function-level LLM-based APR. Our study reveals that LLMs with zero-shot learning are powerful function-level APR techniques. Moreover, directly applying the auxiliary repair-relevant information to LLMs significantly increases the function-level repair performance. Inspired by our findings, we design a Dual-LLM framework utilizing the Chain of Thought technique, named SRepair, which achieves remarkable repair performance by correctly fixing 300 single-function bugs in the Defects4J dataset, surpassing ChatRepair [79] by 85% and Repilot [74] by 1.59\u00d7. Notably, SRepair successfully fixes 32 multi-function bugs, which, to our best knowledge, is the first time this has been achieved by any APR technique. DATA AVAILABILITY The data and code are available at GitHub [1] for public evaluation." }, { "url": "http://arxiv.org/abs/2302.13971v1", "title": "LLaMA: Open and Efficient Foundation Language Models", "abstract": "We introduce LLaMA, a collection of foundation language models ranging from\n7B to 65B parameters. We train our models on trillions of tokens, and show that\nit is possible to train state-of-the-art models using publicly available\ndatasets exclusively, without resorting to proprietary and inaccessible\ndatasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks,\nand LLaMA-65B is competitive with the best models, Chinchilla-70B and\nPaLM-540B. We release all our models to the research community.", "authors": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample", "published": "2023-02-27", "updated": "2023-02-27", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2306.13549v2", "title": "A Survey on Multimodal Large Language Models", "abstract": "Recently, Multimodal Large Language Model (MLLM) represented by GPT-4V has\nbeen a new rising research hotspot, which uses powerful Large Language Models\n(LLMs) as a brain to perform multimodal tasks.
The surprising emergent\ncapabilities of MLLM, such as writing stories based on images and OCR-free math\nreasoning, are rare in traditional multimodal methods, suggesting a potential\npath to artificial general intelligence. To this end, both academia and\nindustry have endeavored to develop MLLMs that can compete with or even better\nthan GPT-4V, pushing the limit of research at a surprising speed. In this\npaper, we aim to trace and summarize the recent progress of MLLMs. First of\nall, we present the basic formulation of MLLM and delineate its related\nconcepts, including architecture, training strategy and data, as well as\nevaluation. Then, we introduce research topics about how MLLMs can be extended\nto support more granularity, modalities, languages, and scenarios. We continue\nwith multimodal hallucination and extended techniques, including Multimodal ICL\n(M-ICL), Multimodal CoT (M-CoT), and LLM-Aided Visual Reasoning (LAVR). To\nconclude the paper, we discuss existing challenges and point out promising\nresearch directions. In light of the fact that the era of MLLM has only just\nbegun, we will keep updating this survey and hope it can inspire more research.\nAn associated GitHub link collecting the latest papers is available at\nhttps://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models.", + "authors": "Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, Enhong Chen", + "published": "2023-06-23", + "updated": "2024-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.10583v4", + "title": "Automated Repair of Programs from Large Language Models", + "abstract": "Large language models such as Codex, have shown the capability to produce\ncode for many programming tasks. However, the success rate of existing models\nis low, especially for complex programming tasks. One of the reasons is that\nlanguage models lack awareness of program semantics, resulting in incorrect\nprograms, or even programs which do not compile. In this paper, we\nsystematically study whether automated program repair (APR) techniques can fix\nthe incorrect solutions produced by language models in LeetCode contests. The\ngoal is to study whether APR techniques can enhance reliability in the code\nproduced by large language models. Our study revealed that: (1) automatically\ngenerated code shares common programming mistakes with human-crafted solutions,\nindicating APR techniques may have potential to fix auto-generated code; (2)\ngiven bug location information provided by a statistical fault localization\napproach, the newly released Codex edit mode, which supports editing code, is\nsimilar to or better than existing Java repair tools TBar and Recoder in fixing\nincorrect solutions. 
By analyzing the experimental results generated by these\ntools, we provide several suggestions: (1) enhancing APR tools to surpass\nlimitations in patch space (e.g., introducing more flexible fault localization)\nis desirable; (2) as large language models can derive more fix patterns by\ntraining on more data, future APR tools could shift focus from adding more fix\npatterns to synthesis/semantics based approaches, (3) combination of language\nmodels with APR to curate patch ingredients, is worth studying.", + "authors": "Zhiyu Fan, Xiang Gao, Martin Mirchev, Abhik Roychoudhury, Shin Hwei Tan", + "published": "2022-05-21", + "updated": "2023-01-02", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.09308v1", + "title": "GAMMA: Revisiting Template-based Automated Program Repair via Mask Prediction", + "abstract": "Automated program repair (APR) aims to fix software bugs without human\nintervention and template-based APR has been widely investigated with promising\nresults. However, it is challenging for template-based APR to select the\nappropriate donor code, which is an important repair ingredient for generating\ncandidate patches. Inappropriate donor code may cause plausible but incorrect\npatch generation even with correct fix patterns, limiting the repair\nperformance.\n In this paper, we aim to revisit template-based APR, and propose GAMMA, to\ndirectly leverage large pre-trained language models for donor code generation.\nOur main insight is that instead of retrieving donor code in the local buggy\nfile, we can directly predict the correct code tokens based on the context code\nsnippets and repair patterns by a cloze task. Specifically, (1) GAMMA revises a\nvariety of fix templates from state-of-the-art template-based APR techniques\n(i.e., TBar) and transforms them into mask patterns. (2) GAMMA adopts a\npre-trained language model to predict the correct code for masked code as a\nfill-in-the-blank task. The experimental results demonstrate that GAMMA\ncorrectly repairs 82 bugs on Defects4J-v1.2, which achieves 20.59\\% (14 bugs)\nand 26.15\\% (17 bugs) improvement over the previous state-of-the-art\ntemplate-based approach TBar and learning-based one Recoder. Furthermore, GAMMA\nrepairs 45 bugs and 22 bugs from the additional Defects4J-v2.0 and QuixBugs,\nindicating the generalizability of GAMMA in addressing the dataset overfitting\nissue. We also prove that adopting other pre-trained language models can\nprovide substantial advancement, e.g., CodeBERT-based and ChatGPT-based GAMMA\nis able to fix 80 and 67 bugs on Defects4J-v1.2, indicating the scalability of\nGAMMA. Overall, our study highlights the promising future of adopting\npre-trained models to generate correct patches on top of fix patterns.", + "authors": "Quanjun Zhang, Chunrong Fang, Tongke Zhang, Bowen Yu, Weisong Sun, Zhenyu Chen", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.08281v2", + "title": "Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-shot Learning", + "abstract": "Due to the promising future of Automated Program Repair (APR), researchers\nhave proposed various APR techniques, including heuristic-based,\ntemplate-based, and constraint-based techniques. Among such classic APR\ntechniques, template-based techniques have been widely recognized as state of\nthe art. 
However, such template-based techniques require predefined templates\nto perform repair, and their effectiveness is thus limited. To this end,\nresearchers leveraged the recent advances in Deep Learning to further improve\nAPR. Such learning-based techniques view APR as a Neural Machine Translation\nproblem, using the buggy/fixed code snippets as the source/target languages for\ntranslation. In this way, such techniques heavily rely on large numbers of\nhigh-quality bug-fixing commits, which can be extremely costly and challenging\nto construct. Furthermore, the edit variety of these learning-based techniques\nare limited to the available bug-fixes within their training datasets.\nTherefore, in this paper, we aim to revisit the learning-based APR problem, and\npropose AlphaRepair, to leverage zero-shot learning directly using large\npre-trained code models for APR. Our main insight is instead of modeling what a\nrepair edit should look like, we can directly predict what the correct code is\nbased on the context information. We have implemented AlphaRepair as a\npractical multilingual APR tool based on the recent CodeBERT model. Our results\non the widely used Defects4J benchmark show that AlphaRepair can substantially\noutperform state-of-the-art APR tools. We also studied the impact of different\ndesign choices and show that AlphaRepair performs even better on a newer\nversion of Defects4J (2.0) with 3.3X more fixes than best performing baseline,\nindicating that AlphaRepair can potentially avoid the dataset-overfitting issue\nof existing learning-based techniques.", + "authors": "Chunqiu Steven Xia, Lingming Zhang", + "published": "2022-07-17", + "updated": "2022-07-26", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00073v4", + "title": "CURE: Code-Aware Neural Machine Translation for Automatic Program Repair", + "abstract": "Automatic program repair (APR) is crucial to improve software reliability.\nRecently, neural machine translation (NMT) techniques have been used to fix\nsoftware bugs automatically. While promising, these approaches have two major\nlimitations. Their search space often does not contain the correct fix, and\ntheir search strategy ignores software knowledge such as strict code syntax.\nDue to these limitations, existing NMT-based techniques underperform the best\ntemplate-based approaches.\n We propose CURE, a new NMT-based APR technique with three major novelties.\nFirst, CURE pre-trains a programming language (PL) model on a large software\ncodebase to learn developer-like source code before the APR task. Second, CURE\ndesigns a new code-aware search strategy that finds more correct fixes by\nfocusing on compilable patches and patches that are close in length to the\nbuggy code. 
Finally, CURE uses a subword tokenization technique to generate a\nsmaller search space that contains more correct fixes.\n Our evaluation on two widely-used benchmarks shows that CURE correctly fixes\n57 Defects4J bugs and 26 QuixBugs bugs, outperforming all existing APR\ntechniques on both benchmarks.", + "authors": "Nan Jiang, Thibaud Lutellier, Lin Tan", + "published": "2021-02-26", + "updated": "2021-09-02", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.12307v3", + "title": "LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models", + "abstract": "We present LongLoRA, an efficient fine-tuning approach that extends the\ncontext sizes of pre-trained large language models (LLMs), with limited\ncomputation cost. Typically, training LLMs with long context sizes is\ncomputationally expensive, requiring extensive training hours and GPU\nresources. For example, training on the context length of 8192 needs 16x\ncomputational costs in self-attention layers as that of 2048. In this paper, we\nspeed up the context extension of LLMs in two aspects. On the one hand,\nalthough dense global attention is needed during inference, fine-tuning the\nmodel can be effectively and efficiently done by sparse local attention. The\nproposed shifted sparse attention effectively enables context extension,\nleading to non-trivial computation saving with similar performance to\nfine-tuning with vanilla attention. Particularly, it can be implemented with\nonly two lines of code in training, while being optional in inference. On the\nother hand, we revisit the parameter-efficient fine-tuning regime for context\nexpansion. Notably, we find that LoRA for context extension works well under\nthe premise of trainable embedding and normalization. LongLoRA combines this\nimproved LoRA with S^2-Attn. LongLoRA demonstrates strong empirical results on\nvarious tasks on Llama2 models from 7B/13B to 70B. LongLoRA extends Llama2 7B\nfrom 4k context to 100k, or Llama2 70B to 32k on a single 8x A100 machine.\nLongLoRA extends models' context while retaining their original architectures,\nand is compatible with most existing techniques, like Flash-Attention2. In\naddition, we further conduct supervised fine-tuning with LongLoRA and our long\ninstruction-following LongAlpaca dataset.", + "authors": "Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, Jiaya Jia", + "published": "2023-09-21", + "updated": "2024-03-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1409.3215v3", + "title": "Sequence to Sequence Learning with Neural Networks", + "abstract": "Deep Neural Networks (DNNs) are powerful models that have achieved excellent\nperformance on difficult learning tasks. Although DNNs work well whenever large\nlabeled training sets are available, they cannot be used to map sequences to\nsequences. In this paper, we present a general end-to-end approach to sequence\nlearning that makes minimal assumptions on the sequence structure. Our method\nuses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to\na vector of a fixed dimensionality, and then another deep LSTM to decode the\ntarget sequence from the vector. 
Our main result is that on an English to\nFrench translation task from the WMT'14 dataset, the translations produced by\nthe LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's\nBLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did\nnot have difficulty on long sentences. For comparison, a phrase-based SMT\nsystem achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM\nto rerank the 1000 hypotheses produced by the aforementioned SMT system, its\nBLEU score increases to 36.5, which is close to the previous best result on\nthis task. The LSTM also learned sensible phrase and sentence representations\nthat are sensitive to word order and are relatively invariant to the active and\nthe passive voice. Finally, we found that reversing the order of the words in\nall source sentences (but not target sentences) improved the LSTM's performance\nmarkedly, because doing so introduced many short term dependencies between the\nsource and the target sentence which made the optimization problem easier.", + "authors": "Ilya Sutskever, Oriol Vinyals, Quoc V. Le", + "published": "2014-09-10", + "updated": "2014-12-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.12689v2", + "title": "Large Language Models are Few-Shot Clinical Information Extractors", + "abstract": "A long-running goal of the clinical NLP community is the extraction of\nimportant variables trapped in clinical notes. However, roadblocks have\nincluded dataset shift from the general domain and a lack of public clinical\ncorpora and annotations. In this work, we show that large language models, such\nas InstructGPT, perform well at zero- and few-shot information extraction from\nclinical text despite not being trained specifically for the clinical domain.\nWhereas text classification and generation performance have already been\nstudied extensively in such models, here we additionally demonstrate how to\nleverage them to tackle a diverse set of NLP tasks which require more\nstructured outputs, including span identification, token-level sequence\nclassification, and relation extraction. Further, due to the dearth of\navailable data to evaluate these systems, we introduce new datasets for\nbenchmarking few-shot clinical information extraction based on a manual\nre-annotation of the CASI dataset for new tasks. On the clinical extraction\ntasks we studied, the GPT-3 systems significantly outperform existing zero- and\nfew-shot baselines.", + "authors": "Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, David Sontag", + "published": "2022-05-25", + "updated": "2022-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.11515v3", + "title": "Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction", + "abstract": "Many automated test generation techniques have been developed to aid\ndevelopers with writing tests. To facilitate full automation, most existing\ntechniques aim to either increase coverage, or generate exploratory inputs.\nHowever, existing test generation techniques largely fall short of achieving\nmore semantic objectives, such as generating tests to reproduce a given bug\nreport. 
Reproducing bugs is nonetheless important, as our empirical study shows\nthat the number of tests added in open source repositories due to issues was\nabout 28% of the corresponding project test suite size. Meanwhile, due to the\ndifficulties of transforming the expected program semantics in bug reports into\ntest oracles, existing failure reproduction techniques tend to deal exclusively\nwith program crashes, a small subset of all bug reports. To automate test\ngeneration from general bug reports, we propose LIBRO, a framework that uses\nLarge Language Models (LLMs), which have been shown to be capable of performing\ncode-related tasks. Since LLMs themselves cannot execute the target buggy code,\nwe focus on post-processing steps that help us discern when LLMs are effective,\nand rank the produced tests according to their validity. Our evaluation of\nLIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate\nfailure reproducing test cases for 33% of all studied cases (251 out of 750),\nwhile suggesting a bug reproducing test in first place for 149 bugs. To\nmitigate data contamination, we also evaluate LIBRO against 31 bug reports\nsubmitted after the collection of the LLM training data terminated: LIBRO\nproduces bug reproducing tests for 32% of the studied bug reports. Overall, our\nresults show LIBRO has the potential to significantly enhance developer\nefficiency by automatically generating tests from bug reports.", + "authors": "Sungmin Kang, Juyeon Yoon, Shin Yoo", + "published": "2022-09-23", + "updated": "2023-07-25", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.14834v4", + "title": "Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models", + "abstract": "Detecting bugs in Deep Learning (DL) libraries (e.g., TensorFlow/PyTorch) is\ncritical for almost all downstream DL systems in ensuring effectiveness/safety\nfor end users. Meanwhile, traditional fuzzing techniques can be hardly\neffective for such a challenging domain since the input DL programs need to\nsatisfy both the input language (e.g., Python) syntax/semantics and the DL API\ninput/shape constraints for tensor computations.\n To address these limitations, we propose TitanFuzz - the first approach to\ndirectly leveraging Large Language Models (LLMs) to generate input programs for\nfuzzing DL libraries. LLMs are titanic models trained on billions of code\nsnippets and can auto-regressively generate human-like code snippets. Our key\ninsight is that modern LLMs can also include numerous code snippets invoking DL\nlibrary APIs in their training corpora, and thus can implicitly learn both\nlanguage syntax/semantics and intricate DL API constraints for valid DL program\ngeneration. More specifically, we use both generative and infilling LLMs (e.g.,\nCodex/InCoder) to generate and mutate valid/diverse input DL programs for\nfuzzing. Our experimental results demonstrate that TitanFuzz can achieve\n30.38%/50.84% higher code coverage than state-of-the-art fuzzers on\nTensorFlow/PyTorch. Furthermore, TitanFuzz is able to detect 65 bugs, with 41\nalready confirmed as previously unknown bugs.\n This paper demonstrates that modern titanic LLMs can be leveraged to directly\nperform both generation-based and mutation-based fuzzing studied for decades,\nwhile being fully automated, generalizable, and applicable to domains\nchallenging for traditional approaches (such as DL systems). 
We hope TitanFuzz\ncan stimulate more work in this promising direction of LLMs for fuzzing.", + "authors": "Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, Lingming Zhang", + "published": "2022-12-30", + "updated": "2023-03-07", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.02120v1", + "title": "Magicoder: Source Code Is All You Need", + "abstract": "We introduce Magicoder, a series of fully open-source (code, weights, and\ndata) Large Language Models (LLMs) for code that significantly closes the gap\nwith top code models while having no more than 7B parameters. Magicoder models\nare trained on 75K synthetic instruction data using OSS-Instruct, a novel\napproach to enlightening LLMs with open-source code snippets to generate\nhigh-quality instruction data for code. Our main motivation is to mitigate the\ninherent bias of the synthetic data generated by LLMs by empowering them with a\nwealth of open-source references for the production of more diverse, realistic,\nand controllable data. The orthogonality of OSS-Instruct and other data\ngeneration methods like Evol-Instruct further enables us to build an enhanced\nMagicoderS. Both Magicoder and MagicoderS substantially outperform\nstate-of-the-art code models with similar or even larger sizes on a wide range\nof coding benchmarks, including Python text-to-code generation, multilingual\ncoding, and data-science program completion. Notably, MagicoderS-CL-7B based on\nCodeLlama even surpasses the prominent ChatGPT on HumanEval+ (66.5 vs. 65.9 in\npass@1). Overall, OSS-Instruct opens a new direction for low-bias and\nhigh-quality instruction tuning using abundant open-source references.", + "authors": "Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, Lingming Zhang", + "published": "2023-12-04", + "updated": "2023-12-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.07263v1", + "title": "InferFix: End-to-End Program Repair with LLMs", + "abstract": "Software development life cycle is profoundly influenced by bugs: their\nintroduction, identification, and eventual resolution account for a significant\nportion of software cost. This has motivated software engineering researchers\nand practitioners to propose different approaches for automating the\nidentification and repair of software defects. Large language models have been\nadapted to the program repair task through few-shot demonstration learning and\ninstruction prompting, treating this as an infilling task. However, these\nmodels have only focused on learning general bug-fixing patterns for\nuncategorized bugs mined from public repositories. In this paper, we propose\nInferFix: a transformer-based program repair framework paired with a\nstate-of-the-art static analyzer to fix critical security and performance bugs.\nInferFix combines a Retriever -- transformer encoder model pretrained via\ncontrastive learning objective, which aims at searching for semantically\nequivalent bugs and corresponding fixes; and a Generator -- a large language\nmodel (Codex Cushman) finetuned on supervised bug-fix data with prompts\naugmented via bug type annotations and semantically similar fixes retrieved\nfrom an external non-parametric memory. 
To train and evaluate our approach, we\ncurated InferredBugs, a novel, metadata-rich dataset of bugs extracted by\nexecuting the Infer static analyzer on the change histories of thousands of\nJava and C# repositories. Our evaluation demonstrates that InferFix outperforms\nstrong LLM baselines, with a top-1 accuracy of 65.6% for generating fixes in C#\nand 76.8% in Java. We discuss the deployment of InferFix alongside Infer at\nMicrosoft which offers an end-to-end solution for detection, classification,\nand localization of bugs, as well as fixing and validation of candidate\npatches, integrated in the continuous integration pipeline to automate the\nsoftware development workflow.", + "authors": "Matthew Jin, Syed Shahriar, Michele Tufano, Xin Shi, Shuai Lu, Neel Sundaresan, Alexey Svyatkovskiy", + "published": "2023-03-13", + "updated": "2023-03-13", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.10494v1", + "title": "Revisiting the Plastic Surgery Hypothesis via Large Language Models", + "abstract": "Automated Program Repair (APR) aspires to automatically generate patches for\nan input buggy program. Traditional APR tools typically focus on specific bug\ntypes and fixes through the use of templates, heuristics, and formal\nspecifications. However, these techniques are limited in terms of the bug types\nand patch variety they can produce. As such, researchers have designed various\nlearning-based APR tools with recent work focused on directly using Large\nLanguage Models (LLMs) for APR. While LLM-based APR tools are able to achieve\nstate-of-the-art performance on many repair datasets, the LLMs used for direct\nrepair are not fully aware of the project-specific information such as unique\nvariable or method names.\n The plastic surgery hypothesis is a well-known insight for APR, which states\nthat the code ingredients to fix the bug usually already exist within the same\nproject. Traditional APR tools have largely leveraged the plastic surgery\nhypothesis by designing manual or heuristic-based approaches to exploit such\nexisting code ingredients. However, as recent APR research starts focusing on\nLLM-based approaches, the plastic surgery hypothesis has been largely ignored.\nIn this paper, we ask the following question: How useful is the plastic surgery\nhypothesis in the era of LLMs? Interestingly, LLM-based APR presents a unique\nopportunity to fully automate the plastic surgery hypothesis via fine-tuning\nand prompting. To this end, we propose FitRepair, which combines the direct\nusage of LLMs with two domain-specific fine-tuning strategies and one prompting\nstrategy for more powerful APR. Our experiments on the widely studied Defects4j\n1.2 and 2.0 datasets show that FitRepair fixes 89 and 44 bugs (substantially\noutperforming the best-performing baseline by 15 and 8), respectively,\ndemonstrating a promising future of the plastic surgery hypothesis in the era\nof LLMs.", + "authors": "Chunqiu Steven Xia, Yifeng Ding, Lingming Zhang", + "published": "2023-03-18", + "updated": "2023-03-18", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.00385v1", + "title": "Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT", + "abstract": "Automated Program Repair (APR) aims to automatically generate patches for\nbuggy programs. 
Recent APR work has been focused on leveraging modern Large\nLanguage Models (LLMs) to directly generate patches for APR. Such LLM-based APR\ntools work by first constructing an input prompt built using the original buggy\ncode and then queries the LLM to generate patches. While the LLM-based APR\ntools are able to achieve state-of-the-art results, it still follows the\nclassic Generate and Validate repair paradigm of first generating lots of\npatches and then validating each one afterwards. This not only leads to many\nrepeated patches that are incorrect but also miss the crucial information in\ntest failures as well as in plausible patches.\n To address these limitations, we propose ChatRepair, the first fully\nautomated conversation-driven APR approach that interleaves patch generation\nwith instant feedback to perform APR in a conversational style. ChatRepair\nfirst feeds the LLM with relevant test failure information to start with, and\nthen learns from both failures and successes of earlier patching attempts of\nthe same bug for more powerful APR. For earlier patches that failed to pass all\ntests, we combine the incorrect patches with their corresponding relevant test\nfailure information to construct a new prompt for the LLM to generate the next\npatch. In this way, we can avoid making the same mistakes. For earlier patches\nthat passed all the tests, we further ask the LLM to generate alternative\nvariations of the original plausible patches. In this way, we can further build\non and learn from earlier successes to generate more plausible patches to\nincrease the chance of having correct patches. While our approach is general,\nwe implement ChatRepair using state-of-the-art dialogue-based LLM -- ChatGPT.\nBy calculating the cost of accessing ChatGPT, we can fix 162 out of 337 bugs\nfor \\$0.42 each!", + "authors": "Chunqiu Steven Xia, Lingming Zhang", + "published": "2023-04-01", + "updated": "2023-04-01", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.14179v1", + "title": "Practical Program Repair in the Era of Large Pre-trained Language Models", + "abstract": "Automated Program Repair (APR) aims to help developers automatically patch\nsoftware bugs. However, current state-of-the-art traditional and learning-based\nAPR techniques face the problem of limited patch variety, failing to fix\ncomplicated bugs. This is mainly due to the reliance on bug-fixing datasets to\ncraft fix templates or directly predict potential patches. Large Pre-Trained\nLanguage Models (PLMs), trained using billions of text/code tokens, can\npotentially help avoid this issue. Very recently, researchers have directly\nleveraged PLMs for APR without relying on any bug-fixing datasets. Meanwhile,\nsuch existing work either failed to include state-of-the-art PLMs or was not\nevaluated on realistic datasets.\n In this work, we perform the first extensive study on directly applying PLMs\nfor APR. We select 9 recent state-of-the-art PLMs, including both generative\nand infilling models, ranging from 125M to 20B in size. We designed 3 different\nrepair settings to evaluate the different ways we can use PLMs to generate\npatches. We apply the PLMs under these repair settings on 5 datasets across 3\ndifferent languages and compare different PLMs in the number of bugs fixed,\ngeneration speed and compilation rate. 
Our study demonstrates that directly\napplying state-of-the-art PLMs can already substantially outperform all\nexisting APR techniques on all our datasets. Among the studied PLMs, the\nscaling effect exists for APR where larger models tend to achieve better\nperformance. Also, we show for the first time that suffix code after the buggy\nline (adopted in infilling-style APR) is important in not only generating more\nfixes but more patches with higher compilation rate. Besides patch generation,\nthe PLMs consider correct patches to be more natural than other ones, and can\neven be leveraged for effective patch ranking or patch correctness checking.", + "authors": "Chunqiu Steven Xia, Yuxiang Wei, Lingming Zhang", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.00608v3", + "title": "Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair", + "abstract": "During Automated Program Repair (APR), it can be challenging to synthesize\ncorrect patches for real-world systems in general-purpose programming\nlanguages. Recent Large Language Models (LLMs) have been shown to be helpful\n\"copilots\" in assisting developers with various coding tasks, and have also\nbeen directly applied for patch synthesis. However, most LLMs treat programs as\nsequences of tokens, meaning that they are ignorant of the underlying semantics\nconstraints of the target programming language. This results in plenty of\nstatically invalid generated patches, impeding the practicality of the\ntechnique. Therefore, we propose Repilot, a general code generation framework\nto further copilot the AI \"copilots\" (i.e., LLMs) by synthesizing more valid\npatches during the repair process. Our key insight is that many LLMs produce\noutputs autoregressively (i.e., token by token), resembling human writing\nprograms, which can be significantly boosted and guided through a Completion\nEngine. 
Repilot synergistically synthesizes a candidate patch through the\ninteraction between an LLM and a Completion Engine, which 1) prunes away\ninfeasible tokens suggested by the LLM and 2) proactively completes the token\nbased on the suggestions provided by the Completion Engine. Our evaluation on a\nsubset of the widely-used Defects4j 1.2 and 2.0 datasets shows that Repilot\noutperforms state-of-the-art techniques by fixing 27% and 47% more bugs,\nrespectively. Moreover, Repilot produces more valid and correct patches than\nthe base LLM with the same budget. While we focus on leveraging Repilot for APR\nin this work, the overall approach is also generalizable to other code\ngeneration tasks.", + "authors": "Yuxiang Wei, Chunqiu Steven Xia, Lingming Zhang", + "published": "2023-09-01", + "updated": "2023-11-08", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.LG", + "cs.PL", + "I.2.2; D.2.5" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.05583v3", + "title": "How to Fine-Tune BERT for Text Classification?", + "abstract": "Language model pre-training has proven to be useful in learning universal\nlanguage representations. As a state-of-the-art language model pre-training\nmodel, BERT (Bidirectional Encoder Representations from Transformers) has\nachieved amazing results in many language understanding tasks. In this paper,\nwe conduct exhaustive experiments to investigate different fine-tuning methods\nof BERT on text classification task and provide a general solution for BERT\nfine-tuning. Finally, the proposed solution obtains new state-of-the-art\nresults on eight widely-studied text classification datasets.", + "authors": "Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang", + "published": "2019-05-14", + "updated": "2020-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1602.05643v1", + "title": "An Analysis of the Search Spaces for Generate and Validate Patch Generation Systems", + "abstract": "We present the first systematic analysis of the characteristics of patch\nsearch spaces for automatic patch generation systems. We analyze the search\nspaces of two current state-of-the-art systems, SPR and Prophet, with 16\ndifferent search space configurations. Our results are derived from an analysis\nof 1104 different search spaces and 768 patch generation executions. Together\nthese experiments consumed over 9000 hours of CPU time on Amazon EC2.\n The analysis shows that 1) correct patches are sparse in the search spaces\n(typically at most one correct patch per search space per defect), 2) incorrect\npatches that nevertheless pass all of the test cases in the validation test\nsuite are typically orders of magnitude more abundant, and 3) leveraging\ninformation other than the test suite is therefore critical for enabling the\nsystem to successfully isolate correct patches.\n We also characterize a key tradeoff in the structure of the search spaces.\nLarger and richer search spaces that contain correct patches for more defects\ncan actually cause systems to find fewer, not more, correct patches. We\nidentify two reasons for this phenomenon: 1) increased validation times because\nof the presence of more candidate patches and 2) more incorrect patches that\npass the test suite and block the discovery of correct patches. 
These\nfundamental properties, which are all characterized for the first time in this\npaper, help explain why past systems often fail to generate correct patches and\nhelp identify challenges, opportunities, and productive future directions for\nthe field.", + "authors": "Fan Long, Martin Rinard", + "published": "2016-02-18", + "updated": "2016-02-18", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "D.2.5" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.02014v1", + "title": "Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT", + "abstract": "Deep Learning (DL) library bugs affect downstream DL applications,\nemphasizing the need for reliable systems. Generating valid input programs for\nfuzzing DL libraries is challenging due to the need for satisfying both\nlanguage syntax/semantics and constraints for constructing valid computational\ngraphs. Recently, the TitanFuzz work demonstrates that modern Large Language\nModels (LLMs) can be directly leveraged to implicitly learn all the constraints\nto generate valid DL programs for fuzzing. However, LLMs tend to generate\nordinary programs following similar patterns seen in their massive training\ncorpora, while fuzzing favors unusual inputs that cover edge cases or are\nunlikely to be manually produced.\n To fill this gap, this paper proposes FuzzGPT, the first technique to prime\nLLMs to synthesize unusual programs for fuzzing. FuzzGPT is built on the\nwell-known hypothesis that historical bug-triggering programs may include\nrare/valuable code ingredients important for bug finding. Traditional\ntechniques leveraging such historical information require intensive human\nefforts to design dedicated generators and ensure the validity of generated\nprograms. FuzzGPT demonstrates that this process can be fully automated via the\nintrinsic capabilities of LLMs (including fine-tuning and in-context learning),\nwhile being generalizable and applicable to challenging domains. While FuzzGPT\ncan be applied with different LLMs, this paper focuses on the powerful\nGPT-style models: Codex and CodeGen. Moreover, FuzzGPT also shows the potential\nof directly leveraging the instruct-following capability of the recent ChatGPT\nfor effective fuzzing. Evaluation on two popular DL libraries (PyTorch and\nTensorFlow) shows that FuzzGPT can substantially outperform TitanFuzz,\ndetecting 76 bugs, with 49 already confirmed as previously unknown bugs,\nincluding 11 high-priority bugs or security vulnerabilities.", + "authors": "Yinlin Deng, Chunqiu Steven Xia, Chenyuan Yang, Shizhuo Dylan Zhang, Shujing Yang, Lingming Zhang", + "published": "2023-04-04", + "updated": "2023-04-04", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.00923v4", + "title": "Multimodal Chain-of-Thought Reasoning in Language Models", + "abstract": "Large language models (LLMs) have shown impressive performance on complex\nreasoning by leveraging chain-of-thought (CoT) prompting to generate\nintermediate reasoning chains as the rationale to infer the answer. However,\nexisting CoT studies have focused on the language modality. We propose\nMultimodal-CoT that incorporates language (text) and vision (images) modalities\ninto a two-stage framework that separates rationale generation and answer\ninference. In this way, answer inference can leverage better generated\nrationales that are based on multimodal information. 
With Multimodal-CoT, our\nmodel under 1 billion parameters outperforms the previous state-of-the-art LLM\n(GPT-3.5) by 16 percentage points (75.17%->91.68% accuracy) on the ScienceQA\nbenchmark and even surpasses human performance. Code is publicly available at\nhttps://github.com/amazon-science/mm-cot.", + "authors": "Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola", + "published": "2023-02-02", + "updated": "2023-02-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.06641v3", + "title": "FDG: A Precise Measurement of Fault Diagnosability Gain of Test Cases", + "abstract": "The performance of many Fault Localisation (FL) techniques directly depends\non the quality of the used test suites. Consequently, it is extremely useful to\nbe able to precisely measure how much diagnostic power each test case can\nintroduce when added to a test suite used for FL. Such a measure can help us\nnot only to prioritise and select test cases to be used for FL, but also to\neffectively augment test suites that are too weak to be used with FL\ntechniques. We propose FDG, a new measure of Fault Diagnosability Gain for\nindividual test cases. The design of FDG is based on our analysis of existing\nmetrics that are designed to prioritise test cases for better FL. Unlike other\nmetrics, FDG exploits the ongoing FL results to emphasise the parts of the\nprogram for which more information is needed. Our evaluation of FDG with\nDefects4J shows that it can successfully help the augmentation of test suites\nfor better FL. When given only a few failing test cases (2.3 test cases on\naverage), FDG can effectively augment the given test suite by prioritising the\ntest cases generated automatically by EvoSuite: the augmentation can improve\nthe acc@1 and acc@10 of the FL results by 11.6x and 2.2x on average, after\nrequiring only ten human judgements on the correctness of the assertions\nEvoSuite generates.", + "authors": "Gabin An, Shin Yoo", + "published": "2021-04-14", + "updated": "2022-05-24", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.12755v3", + "title": "SelfAPR: Self-supervised Program Repair with Test Execution Diagnostics", + "abstract": "Learning-based program repair has achieved good results in a recent series of\npapers. Yet, we observe that the related work fails to repair some bugs because\nof a lack of knowledge about 1) the application domain of the program being\nrepaired, and 2) the fault type being repaired. In this paper, we solve both\nproblems by changing the learning paradigm from supervised training to\nself-supervised training in an approach called SelfAPR. First, SelfAPR\ngenerates training samples on disk by perturbing a previous version of the\nprogram being repaired, enforcing the neural model to capture project-specific\nknowledge. This is different from the previous work based on mined past\ncommits. Second, SelfAPR executes all training samples and extracts and encodes\ntest execution diagnostics into the input representation, steering the neural\nmodel to fix the kind of fault. This is different from the existing studies\nthat only consider static source code as input. We implement SelfAPR and\nevaluate it in a systematic manner. We generate 1 039 873 training samples\nobtained by perturbing 17 open-source projects.
We evaluate SelfAPR on 818 bugs\nfrom Defects4J, SelfAPR correctly repairs 110 of them, outperforming all the\nsupervised learning repair approaches.", + "authors": "He Ye, Matias Martinez, Xiapu Luo, Tao Zhang, Martin Monperrus", + "published": "2022-03-23", + "updated": "2022-09-03", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.03762v7", + "title": "Attention Is All You Need", + "abstract": "The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training data.", + "authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin", + "published": "2017-06-12", + "updated": "2023-08-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. 
Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes1 in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. 
In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, as does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? 
In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Model\n(LLM) to evaluate the quality of other LLMs. 
Many studies have employed\nproprietary closed-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language Models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. 
Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). 
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to free-text queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from perspectives of both computer\nscience and the Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. 
To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes; however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. 
We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. 
We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals that LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. 
This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. 
Thus,\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called the Comprehensive Bias\nNeutralization Framework (CBNF), which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ), which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs' robustness on the task of\nmultiple-choice questions -- a commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. 
Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and that specific option placements may favor certain\npredictions among those top choices, depending on the question, due to\npositional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nimprovements of up to 8 percentage points across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. 
Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate the biases of LLMs\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method,\nGF-Think.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. 
We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as fair rankers.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep-rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. 
While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to collect\ncomprehensively and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large-scale investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. 
Finally,\nthe debiasing of OPT using LoRA reduces bias by up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, concerns\nabout the appropriate use of these benchmarks and the fair comparison\nof different models are growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, referring to cases where data related to\nevaluation sets is occasionally used for model training. This phenomenon has\nbecome more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. 
We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of their capabilities, how to rationally evaluate the\ncapabilities of LLMs remains an open problem. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the high\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. 
Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% of the code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% of code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention.\nCompared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are, however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview of bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. 
Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemsontrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensemble of\nmodels. 
Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs), the\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on the user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat leads to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. 
The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. 
We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. 
Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. 
This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. 
Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. 
Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. 
This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. 
Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. 
Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. 
Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. 
In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation that demand sophisticated LLM\ninteractions. 
Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content.
A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. 
To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into model's limitation.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. 
Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated to each culture by the LLM.\nWe discover that culture-conditioned generation consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.13784v1", + "title": "Iteratively Prompting Multimodal LLMs to Reproduce Natural and AI-Generated Images", + "abstract": "With the digital imagery landscape rapidly evolving, image stocks and\nAI-generated image marketplaces have become central to visual media.\nTraditional stock images now exist alongside innovative platforms that trade in\nprompts for AI-generated visuals, driven by sophisticated APIs like DALL-E 3\nand Midjourney. This paper studies the possibility of employing multi-modal\nmodels with enhanced visual understanding to mimic the outputs of these\nplatforms, introducing an original attack strategy. Our method leverages\nfine-tuned CLIP models, a multi-label classifier, and the descriptive\ncapabilities of GPT-4V to create prompts that generate images similar to those\navailable in marketplaces and from premium stock image providers, yet at a\nmarkedly lower expense. In presenting this strategy, we aim to spotlight a new\nclass of economic and security considerations within the realm of digital\nimagery. Our findings, supported by both automated metrics and human\nassessment, reveal that comparable visual content can be produced for a\nfraction of the prevailing market prices ($0.23 - $0.27 per image), emphasizing\nthe need for awareness and strategic discussions about the integrity of digital\nmedia in an increasingly AI-integrated landscape. 
Our work also contributes to\nthe field by assembling a dataset consisting of approximately 19 million\nprompt-image pairs generated by the popular Midjourney platform, which we plan\nto release publicly.", + "authors": "Ali Naseh, Katherine Thai, Mohit Iyyer, Amir Houmansadr", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CL", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 Text-to-Image Generation Models The journey of text-to-image generation began with methods based on Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). These GAN-based models paved the way for the field, focusing on synthesizing visual content from textual descriptions (Reed et al., 2016; Mansimov et al., 2015; Zhang et al., 2017; Xu et al., 2018; Zhu et al., 2019). In recent years, the emergence of diffusion models (Sohl-Dickstein et al., 2015) and large pre-trained autoregressive models has led to the introduction of many new text-to-image models (Ramesh et al., 2021; Ding et al., 2021; Cho et al., 2020). These developments introduce multimodal transformer language models, which are proficient at learning the distribution of sequences of discrete image codes from given text inputs. Parti (Yu et al., 2022) introduces a sequence-to-sequence autoregressive model treating text-to-image synthesis as a translation task, converting text descriptions into visuals. Imagen (Saharia et al., 2022) employs a large language model for quality text features and introduces an Efficient U-Net for diffusion models. Latent Diffusion Models (LDM) (Rombach et al., 2022), such as Stable Diffusion (Rombach et al., 2021), employ diffusion processes within the latent space. This approach facilitates efficient model training on systems with constrained computational resources, without compromising fidelity and quality. 2.2 Prompt Stealing Attack Prompt stealing attacks represent a domain closely related to our work. In such attacks, the objective of the attacker is typically to reconstruct the original prompt used in generating an image. However, our goal is to replicate AI-generated and natural images, where the resulting prompt does not necessarily need to match the original one precisely, word-by-word. Although this area has seen limited exploration, there have been a few methods in the context of both Large Language Models (LLMs) (Zhang & Ippolito, 2023; Sha & Zhang, 2024) and Text-to-Image models (Shen et al., 2023).", + "pre_questions": [], + "main_content": "Introduction In recent years, image stocks and marketplaces have become increasingly important in the commercial and business sectors. Alongside traditional stock images, known for their high quality and compositions by expert photographers, a new trend has emerged in the form of marketplaces for AI-generated images, such as PromptBase (https://promptbase.com), PromptSea (https://www.promptsea.io), and Neutron Field (https://neutronfield.com). Unlike traditional stocks, where the images themselves are traded, these innovative platforms trade in the prompts that lead to the creation of AI-generated images. Advanced text-to-image APIs like DALL-E 3 (Betker et al., 2023) and Midjourney (https://www.midjourney.com), with their extraordinary ability to generate stunning visuals, are at the forefront of this trend. However, identifying the right prompts to produce such images is not a straightforward task (Cao et al., 2023; Oppenlaender, 2023), leading to the development of marketplaces where users can exchange their crafted prompts.
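To make the prompt-to-image workflow that these marketplaces monetize concrete, the related work above mentions open latent diffusion models such as Stable Diffusion, which turn a text prompt into an image. A minimal, illustrative sketch using the Hugging Face diffusers library follows; the model identifier and example prompt are assumptions chosen for illustration, not artifacts of this paper.

```python
# Illustrative sketch: turning a text prompt into an image with an open latent
# diffusion model via the `diffusers` library. The model ID and prompt below
# are assumptions for illustration only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor illustration of a lighthouse at sunset"  # example prompt
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated.png")
```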
With the growing demand for purchasing prompts from AI-generated sources, and the continued interest in traditional stock images, a pivotal question emerges: Can an adversary find a prompt and utilize current state-of-the-art text-to-image models to generate similar images at a lower cost? This question becomes particularly significant when considering that some of the natural images in traditional stocks are very expensive.5 This paper investigates this query, demonstrating how the latest multi-modal models with visual understanding capabilities can be harnessed for such attacks against these marketplaces. Our study exposes potential vulnerabilities in the digital imagery landscape and emphasizes the necessity for strategic countermeasures in response to these evolving challenges.
[Footnotes: 1 https://promptbase.com 2 https://www.promptsea.io 3 https://neutronfield.com 4 https://www.midjourney.com 5 https://tinyurl.com/28razsrn 6 https://huggingface.co/spaces/fffiloni/CLIP-Interrogator-2 7 https://deepmind.google/technologies/gemini/#introduction]
[Figure 1: Overview of our attack.]
In this paper, we first demonstrate that if an adversary is given images generated by one of the text-to-image APIs featured in AI-generated image marketplaces, they can, by accessing one of these APIs, find a prompt to generate similar images. Additionally, we show that the same attack methodology can be applied to generate images that closely resemble natural images offered in traditional stock markets. This dual capability of the attack strategy underscores its potential impact on both AI-generated and natural image domains. Can we simply utilize off-the-shelf image captioning models to recover the prompts used to generate an input image? Current image captioning models (Li et al., 2023; 2022a; Wang et al., 2022; Chen et al., 2022) often produce general descriptions that capture the broad aspects of an image but typically lack the specificity required for text-to-image API prompts like those used in Midjourney. The textual input prompts for these APIs need to be more specific and follow a particular format, usually by incorporating keywords and key phrases that significantly influence the generation (Oppenlaender, 2023). Directly using captions from standard image captioning models may not yield effective results since they often omit critical details and stylistic elements. While tools like CLIP Interrogator6 can suggest corresponding text and keywords, they may not provide enough detail and are especially limited when describing natural images. Similarly, although multimodal models like Gemini7 and GPT-4V can offer more elaborate descriptions, they might still miss out on essential keywords or named entities. To bridge the gaps presented by these tools, we introduce a three-component attack strategy: a fine-tuned CLIP model (Radford et al., 2021) on a large dataset of Midjourney generations, a multi-label classifier for related keywords and named entities, and GPT-4V for its ability to generate comprehensive prompts based on our instructions and information from the CLIP model and the classifier. We then implement a cyclic approach to refine these prompts, comparing the generated images with the original (ground-truth) image(s). The overview of our attack strategy is illustrated in Figure 1.
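To make the Figure 1 flow concrete, the sketch below walks through one run of the attack loop. It is a hedged illustration, not the paper's released implementation: every helper here (clip_embed, retrieve_keywords, predict_modifiers, ask_gpt4v, generate_image, clip_score) is a hypothetical stub standing in for the fine-tuned CLIP model, the modifier classifier, GPT-4V, and a text-to-image API, respectively.

import random

# Hypothetical stubs for the pipeline components described in the text.
def clip_embed(image): return [random.random() for _ in range(8)]
def retrieve_keywords(emb): return ["french bulldog", "flower garden"]
def predict_modifiers(emb, k=20): return ["illustration", "soft lighting"][:k]
def ask_gpt4v(target, keywords, modifiers, prev_prompt=None, prev_image=None):
    # Initial instructions on the first call; refinement instructions afterwards.
    base = ", ".join(keywords + modifiers)
    return base if prev_prompt is None else prev_prompt + ", refined"
def generate_image(prompt): return f"<image for: {prompt}>"
def clip_score(img_a, img_b): return random.random()  # cosine similarity of CLIP embeddings in practice

def attack(target_image, rounds=3):
    emb = clip_embed(target_image)
    keywords, modifiers = retrieve_keywords(emb), predict_modifiers(emb)
    prompt = ask_gpt4v(target_image, keywords, modifiers)  # initial prompt
    best = (prompt, -1.0)
    for _ in range(rounds):  # cyclic refinement
        candidate = generate_image(prompt)
        score = clip_score(candidate, target_image)
        if score > best[1]:
            best = (prompt, score)
        prompt = ask_gpt4v(target_image, keywords, modifiers,
                           prev_prompt=prompt, prev_image=candidate)
    return best  # best-scoring prompt and its similarity

The loop keeps the best-scoring prompt seen so far, which matches the later observation that similarity typically peaks within a few refinement rounds.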
As a significant contribution of our work, we have collected a large-scale dataset consisting of 19,137,140 generations from Midjourney\u2019s Discord server. This dataset, which pairs each prompt with its corresponding image or gridded image, aids in fine-tuning the CLIP model and training the multi-label classifier to identify related keywords and named entities. Our approach is validated through both automatic metrics and human evaluation, confirming that our attack outperforms existing baselines. Additionally, we provide a cost analysis to justify the feasibility of the attack. Our cost estimation indicates that an attacker can generate a reasonably similar image to the targeted one at a cost of only $0.23 to $0.27. To summarize our contributions:
1. To the best of our knowledge, this work presents the first systematic investigation of potential attacks against images generated by two of the most popular text-to-image APIs, as well as against natural images from commercial stock collections.
2. We demonstrate that an attacker can generate images similar to those produced by AI, typically priced between $3 and $7, and natural stock images valued between $70 and $500 per image, all at a mere fraction of the cost \u2014 just a few cents.
3. A significant portion of our effort was dedicated to collecting and preprocessing a substantial dataset of Midjourney\u2019s generations. This dataset is not only pivotal to our project but also stands as a valuable resource for future research in this field.
3.1 Attacker\u2019s Capabilities. In our proposed attack, we consider an attack scenario that adheres to a realistic paradigm where the attacker is granted black-box access to all text-to-image APIs. Specifically, the attacker lacks access to the underlying model and its training data; their interaction is confined to providing input and receiving the corresponding output. A key assumption in our model is that the attacker possesses inference-time access to the API and also access to the generations of one of these APIs, namely Midjourney\u2019s Discord server. This is a realistic assumption, as we have access to it. Furthermore, we extend our consideration to a more complicated scenario where the attacker gains access to an alternate API, distinct from the primary target, to facilitate the attack. Additionally, in the case of natural images, the attacker only has access to the targeted image(s) and nothing more.
[Figure 2: Examples illustrating scenarios where the baseline models, BLIP2 and CLIP Interrogator, fail to accurately extract relevant keywords and modifiers from given prompts.]
3.2 Attacker\u2019s Objective. The attacker\u2019s goal in this scenario is to identify a prompt that can generate images similar to a given image or set of images, whether produced by a text-to-image API or featured in one of the commercial stock image collections.
This objective necessitates that the attacker has the capability to send queries to the API to execute the attack. As previously noted, our investigation also includes scenarios where the attacker utilizes an alternative API, distinct from the primary target, to carry out the attack.
3.3 Attacker\u2019s Target. In our attack scenario, the main targets are two well-known text-to-image APIs: Midjourney and DALL-E 3. We aim to target the images generated by these APIs in order to find prompts that produce similar images. Additionally, for natural images, we focus on images from the Getty Images website,8 which represents one of the popular image stock sources.
[Footnote: 8 https://www.gettyimages.com/]
4 Methodology
Our approach to executing the attack centers around three primary components: keyword extraction, modifier extraction, and prompt generation. Each of these components plays a pivotal role in the overall effectiveness of the attack strategy, as detailed in the subsequent subsections. Keyword Extraction. As previously noted, image captioning models often lack the specificity required for text-to-image API prompts, typically omitting crucial details and stylistic elements needed for APIs like Midjourney. Adding to this challenge, a key limitation of these models is their inconsistent accuracy in extracting specific keywords, special names, or named entities depicted in the images. Although they capture some details, they often lack the precision needed for completely accurate prompt replication. This combination of generalization and lack of precision in keyword and entity extraction significantly hinders their effectiveness in accurately generating prompts for text-to-image generation. An example of BLIP2\u2019s failure to detect the keywords is shown in Figure 2. To address this challenge, we employ a substantial dataset obtained from Midjourney, consisting of a subset of 2 million pairs of prompts and corresponding upscaled images generated by Midjourney. We then fine-tune a CLIP model on this dataset. The CLIP model, initially pre-trained on 2 billion pairs of captions and images, many of which are not precisely aligned, requires fine-tuning on the Midjourney subset to better align with the nature of the prompts and images in our study. The fine-tuned CLIP model is then utilized to generate a set of 5 million text and corresponding image embeddings. During the inference phase, we use the fine-tuned CLIP model to obtain image embeddings of the targeted image(s) and identify the closest text and image embeddings. From these, we extract a list of named entities and keywords from the most closely related prompts and the corresponding prompts of the nearest images. Finally, this information, along with other relevant data, is fed into GPT-4V. Modifier Extraction. Besides keywords and named entities, incorporating adjectives, adverbs, and specific styles, referred to as modifiers, can significantly enhance the quality of generations (Oppenlaender, 2023). However, image captioning models typically are unable to detect these modifiers, failing to recognize and extract them accurately. Additionally, the CLIP Interrogator also does not consistently perform well in detecting these modifiers. An example of the CLIP Interrogator\u2019s failure to detect modifiers is illustrated in Figure 2. To overcome this, we train a Multi-Layer Perceptron (MLP) as a multi-label classifier on image embeddings.
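The following is a minimal PyTorch sketch of such a classifier: an MLP over CLIP image embeddings with one sigmoid output per modifier, using the three hidden layers, ReLU activation, dropout of 0.3, and learning rate of 0.001 reported later in Section 5.3. The embedding dimension, hidden width, and the synthetic batch are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

EMB_DIM, NUM_MODIFIERS = 1024, 1800  # dims assumed; ~1800 frequent modifiers per the text

# Three hidden layers with ReLU and dropout, one logit per modifier.
model = nn.Sequential(
    nn.Linear(EMB_DIM, 2048), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(2048, NUM_MODIFIERS),
)
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per label (multi-label)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic batch of CLIP image embeddings.
embs = torch.randn(32, EMB_DIM)
labels = torch.randint(0, 2, (32, NUM_MODIFIERS)).float()  # modifier present/absent
optimizer.zero_grad()
loss = criterion(model(embs), labels)
loss.backward()
optimizer.step()

# Inference: keep the k modifiers with the highest predicted probability.
with torch.no_grad():
    probs = torch.sigmoid(model(embs))
    top_scores, top_idx = probs.topk(k=20, dim=-1)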
After identifying 1800 frequent modifiers from our Midjourney dataset, we train the multi-label classifier on a selected subset of around 300,000 samples. During inference, we determine the top modifiers with the highest probability scores. These modifiers, along with other pertinent information, are then provided to GPT-4V. Prompt Generation. The final step involves using a model with strong visual understanding capabilities for detailed image description. For this, we select GPT-4V, a multi-modal model known for its exceptional visual understanding. However, GPT-4V alone may not always suffice. To bridge this gap, we complement it with detailed instructions and information derived from the other two components. Our comprehensive instructions to GPT-4V, called initial instructions, include a detailed task description, relevant modifiers, general keywords, named entities, and an example prompt from Midjourney samples. While the fine-tuned CLIP model and the multi-label classifier provide relevant information, GPT-4V acts as an additional classifier, selecting the most appropriate modifiers, keywords, and named entities. This leads to a two-level extraction process. All the related information, along with the targeted images, is fed into GPT-4V to generate the prompt. Refinement of the Initial Prompt. The initial prompt generated by GPT-4V might not always capture every detail in the image. To enhance the prompt quality, we implement a cyclical refinement process. This involves devising a new set of instructions for GPT-4V, including elements like a detailed task description, relevant modifiers, keywords, named entities, an example from Midjourney, the targeted image(s), the prompt used in the previous round of the cycle, and the corresponding generated image(s). The task description also emphasizes comparing the generated image(s) with the targeted ones, refining the prompt based on differences in styles, themes, or elements. This iterative process continues over multiple rounds to progressively refine the prompt\u2019s accuracy and relevance. The initial and refining prompts are displayed in Table 5 and Table 6, respectively. The overview of the attack is presented in Figure 1.
5 Experimental Settings and Results
5.1 Midjourney Dataset
As previously mentioned, Midjourney stands as a frontier text-to-image API, known for generating high-quality and realistic images. Users interact with Midjourney\u2019s bot on their official Discord server to input prompts and generate images, making these generations accessible to anyone who joins the server. Acknowledging the importance of this API, we collect millions of samples from Midjourney\u2019s Discord channels. This substantial dataset is crucial for our attack, providing a wide array of prompts and corresponding images. Details about the dataset and its pre-processing can be found in the Appendix A.
Table 1: Comparative metrics of the original CLIP and fine-tuned models on varying dataset sizes, highlighting improvements in Top-1, Top-5, and Top-10 accuracy.
Model | Top-1 Accuracy | Top-5 Accuracy | Top-10 Accuracy
Original CLIP model | 0.802 | 0.915 | 0.9395
Fine-tuned on 200K samples | 0.8065 | 0.9413 | 0.967
Fine-tuned on 500K samples | 0.9003 | 0.9653 | 0.9783
Fine-tuned on 2M samples | 0.9167 | 0.9762 | 0.9857
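Table 1's Top-1/5/10 numbers are retrieval accuracies: for each test image, the ground-truth prompt must rank among the k nearest text embeddings by cosine similarity (the protocol is spelled out under "Metric and Evaluation Results" below). A hedged illustration of that computation, with synthetic placeholder embeddings standing in for the fine-tuned CLIP outputs:

import numpy as np

# Placeholder paired embeddings: row i of img_embs and txt_embs correspond
# to the i-th test image and its ground-truth prompt. Unit-normalize so the
# dot product equals cosine similarity.
rng = np.random.default_rng(0)
img_embs = rng.normal(size=(1000, 512))
txt_embs = rng.normal(size=(1000, 512))
img_embs /= np.linalg.norm(img_embs, axis=1, keepdims=True)
txt_embs /= np.linalg.norm(txt_embs, axis=1, keepdims=True)

def topk_accuracy(img_embs: np.ndarray, txt_embs: np.ndarray, k: int) -> float:
    sims = img_embs @ txt_embs.T                    # images x texts cosine similarities
    topk = np.argsort(-sims, axis=1)[:, :k]         # indices of the k closest texts
    hits = (topk == np.arange(len(img_embs))[:, None]).any(axis=1)
    return float(hits.mean())                       # fraction with ground truth in top k

for k in (1, 5, 10):
    print(f"Top-{k} accuracy: {topk_accuracy(img_embs, txt_embs, k):.4f}")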
5.2 Fine-Tuning the CLIP Model
Model and Hyperparameter Settings. For the task of fine-tuning, we select the CLIP ViT-G/14 model (Ilharco et al., 2021), which is pre-trained on a vast collection of 2 billion samples from the LAION dataset (Schuhmann et al., 2022). We pick this model variant as it shows better performance in ImageNet zero-shot classification tasks.9 Through iterative experimentation, we optimize the hyperparameters, setting a learning rate (lr) of 1 \u00d7 10^-5, a batch size of 32, and gradient accumulation steps to 2. The model is fine-tuned for 10 epochs on a cluster with 4 A100 GPUs, ensuring efficient computation and optimal convergence.
[Footnote: 9 https://github.com/mlfoundations/open_clip]
Dataset. Resource constraints guide our decision to fine-tune the CLIP model on a subset of the Midjourney dataset. After experimenting with various sample sizes, we settle on a subset comprising 2 million samples from Midjourney. The results of fine-tuning the model with different dataset sizes are detailed in Table 1. Metric and Evaluation Results. To evaluate the fine-tuned CLIP models, we use a test set of 10,000 samples, employing CLIP\u2019s image and text encoders to generate embeddings. We calculate cosine similarity for each test image to identify the closest corresponding text, measuring the model\u2019s accuracy by whether the ground-truth text is within the Top-1, Top-5, or Top-10 closest texts. For example, a Top-5 accuracy of 90% indicates that the ground-truth text ranks among the top-5 closest texts for 90% of the samples. Table 1 presents the evaluation metrics for both the original and the fine-tuned CLIP models across different dataset sizes. The fine-tuned model significantly outperforms the original in this specific data type. Particularly, the model fine-tuned on 2 million samples demonstrates the best performance, although the improvement is not substantially higher compared to other dataset sizes.
5.3 Training Multi-Label Classifier
[Figure 3: Overview of the multi-label classifier\u2019s performance for different values of k (precision and recall trends over k).]
Model and Hyperparameter Settings. We utilize a Multi-Layer Perceptron (MLP) architecture with three hidden layers for this task. The ReLU activation function is employed, along with a learning rate of 0.001. To mitigate overfitting, we incorporate a dropout rate of 0.3. The training is conducted over 30 epochs using an RTX 8000 GPU. Dataset. Upon analyzing the Midjourney dataset, we identify approximately 1800 distinct modifiers. From this, we select around 360,000 samples, labeling them based on the presence of these modifiers. We allocate 10% of this dataset as a validation set to monitor and adjust model performance. Detailed information about the process of extracting these modifiers can be found in the Appendix A.5. Evaluation Metrics and Results.
For this task, we utilize precision and recall as the primary metrics. During inference, the top k modifiers are selected based on their scores from the classifier, where precision indicates the accuracy of selected modifiers and recall assesses the proportion of correctly predicted ground-truth modifiers. Figure 3 displays precision and recall values for different k thresholds, highlighting a trade-off: increasing k lowers precision, reducing the proportion of correctly selected modifiers, but enhances recall, capturing more correct modifiers. Though higher recall is preferable for comprehensive modifier coverage, avoiding excessive inclusion is essential. To balance these metrics, we choose k = 20 for our overall attack evaluation.
5.4 Overall Attack Evaluation
Experimental Settings. We evaluate two major text-to-image APIs, Midjourney and DALL-E 3, for AI-generated images and the Getty Images website for natural images. Our scenarios include both multiple images from a single prompt and single image cases for Midjourney and DALL-E 3. We also explore an adversary using a different API, with Midjourney as the target and DALL-E 3 as the alternate, across multiple and single image variations. This results in seven distinct settings, with 150 samples for setting 1, 50 samples each for settings 2-4, and 30 samples for settings 5-7. Model and Hyperparameters. In scenarios 1, 2, and 7, we utilize Midjourney as the text-to-image model for generating images. In scenarios 3-6, DALL-E 3 is employed for image generation, with the output size set to 1024 \u00d7 1024 and standard quality. Across all scenarios, GPT-4V serves as the multimodal model for visual understanding and prompt generation. Dataset. For scenarios involving Midjourney-generated images, a small subset of the Midjourney dataset, not used in CLIP model fine-tuning or multi-label classifier training, is selected. In scenarios where DALL-E 3 generates the targeted images, we use a subset of Midjourney prompts to generate corresponding images, forming the evaluation dataset. For scenarios involving natural images, we select diverse samples from the Getty Images website. Metric. To assess the similarity between the images generated by our approach and the ground-truth images, we employ both automated metrics and human evaluation. The automated metric, termed Clip-score (Hessel et al., 2021), involves calculating cosine similarity using the image embeddings from the original CLIP model provided by OpenAI (a minimal sketch of this computation follows the results table below). Additionally, for human evaluation, we select five random samples from each setting (35 samples in total) and ask five annotators to rate these images. They use a 5-point Likert scale to score the images based on perceived similarity. More details about the human evaluation process are presented in Appendix B.
[Figure 4: Comparison of the original image with those generated by our method, BLIP2, and CLIP Interrogator for three different settings.]
Baselines. Our evaluation includes BLIP2, a recent image captioning model, and CLIP Interrogator 2, as baselines. For scenarios involving a single image, we use the textual output provided by these baselines directly. In settings with multiple images, to ensure a fair comparison, we first process each image through the baselines and then select the one with the highest similarity score. Evaluation Results. The results for all seven settings are presented in Table 2.
Table 2: Comparison of different methods using CLIP-S (CLIP-score) and HE (Human Evaluation) metrics across various settings.
Setting | BLIP2 CLIP-S | BLIP2 HE | CLIP Interrogator CLIP-S | CLIP Interrogator HE | Our Attack CLIP-S | Our Attack HE
Midjourney (multiple images) | 0.8477 | 3.4 | 0.8874 | 3.88 | 0.8946 | 4.36
Midjourney (single image) | 0.7483 | 3.6 | 0.8437 | 4.04 | 0.8844 | 4.52
DALL-E 3 (multiple images) | 0.8534 | 4.16 | 0.8569 | 3.68 | 0.8883 | 4.56
DALL-E 3 (single image) | 0.7932 | 2.8 | 0.8007 | 2.84 | 0.896 | 4.08
Cross-setting (multiple images) | 0.798 | 3.8 | 0.8169 | 4.04 | 0.859 | 4.4
Cross-setting (single image) | 0.7911 | 3.04 | 0.8215 | 3.28 | 0.886 | 4.08
Natural images (single image) | 0.7516 | 3.8 | 0.748 | 3.92 | 0.8031 | 4.36
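As referenced above, the CLIP-S numbers in Table 2 are cosine similarities between CLIP image embeddings of a generated image and its target. A minimal sketch of that computation, with placeholder vectors standing in for embeddings from OpenAI's original CLIP image encoder:

import numpy as np

def clip_s(emb_generated: np.ndarray, emb_target: np.ndarray) -> float:
    # Cosine similarity between the two image embeddings, in [-1, 1].
    a = emb_generated / np.linalg.norm(emb_generated)
    b = emb_target / np.linalg.norm(emb_target)
    return float(a @ b)

# Example with synthetic 512-dimensional embeddings (dimension assumed).
rng = np.random.default_rng(0)
print(clip_s(rng.normal(size=512), rng.normal(size=512)))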
Across these settings, our approach consistently outperforms the baselines. The margin of superiority is substantial in most settings, except for the first, where CLIP Interrogator shows close performance based on CLIP-score. It\u2019s observed that the effectiveness of all methods diminishes in scenarios 5 and 6, likely due to the use of a different API by the attacker. Furthermore, the results show improved performance in scenarios involving multiple images compared to those with a single image, likely because multiple images provide more comprehensive information for analysis. Some examples of generated images by our approach and other baselines are presented in Figure 4.
5.5 Natural Images: DALL-E 3 vs Midjourney
Table 3: Image similarity scores for our attack on natural images using different text-to-image APIs, compared with baseline models.
Text-to-Image API | BLIP2 | CLIP Interrogator | Our Attack
DALL-E 3 | 0.7509 | 0.7044 | 0.8284
Midjourney | 0.7516 | 0.7480 | 0.8132
In scenarios where the target imagery comprises natural scenes from commercial stock providers, our investigation encompassed the use of both DALL-E 3 and Midjourney as the underlying text-to-image APIs. As delineated in Table 3, DALL-E 3 and Midjourney exhibit comparable performance in terms of image similarity metrics. However, a qualitative assessment reveals that Midjourney\u2019s outputs are often more lifelike and convincing. This observation is consistent with the platform\u2019s design philosophy, which emphasizes artistic expression and photorealism. An illustrative example contrasting the outputs from both APIs, with respect to a natural image from a commercial stock collection, is provided in Figure 5.
[Figure 5: Comparison of natural images from two different Text-to-Image APIs with those generated by our method, BLIP2, and CLIP Interrogator.]
6 Limitations
Like any other method, our approach may encounter failure cases due to the complexity of its multi-component pipeline. Potential reasons for these failures include:
1. The fine-tuned CLIP model might not retrieve related keywords if the words are rare, or the model itself may fail to identify the closest texts to the images.
2. The multi-label classifier might not select appropriate modifiers.
3. GPT-4V might not accurately extract related keywords, named entities, and modifiers, or it might fail to generate an appropriate prompt.
4. The text-to-image models might not produce images that align well with the text.
The inherent uncertainty in both multimodal LLMs and text-to-image models adds complexity to the task. We have included some examples from setting 1, which yielded the lowest image similarity scores, to analyze what contributed to the suboptimal results. These examples and their interpretations can be found in the Appendix C. The multi-component nature of our attack pipeline presents numerous opportunities for refinement. Each component, from the fine-tuning of the CLIP model and the training regimen of the multi-label classifier to the optimization of modifiers extracted from data, holds the potential to elevate the attack\u2019s effectiveness.
[Figure 6: Examples of natural images where our attack method achieves an acceptable approximation of the original image with only 1-2 rounds of refinement.]
Moreover, the art of prompting GPT-4V is not monolithic; alternative prompt constructions could yield improved results within our cyclic approach. While this paper establishes a solid foundation, the practical enhancement of each component presents a promising direction for future work, offering prospects for even more complex attacks.
7 Cost Estimation
To justify the practicality of our attack, we provide an estimation of its associated costs. These are divided into two main components: the cost of using GPT-4V and the cost associated with the Text-to-Image API. We detail each as follows: GPT-4V Cost. According to OpenAI\u2019s pricing, there is a charge per million tokens for both input and output. To estimate the cost for using GPT-4V, we consider the number of input and output tokens required per round of our attack. Based on our analysis, the average number of input tokens is 900 for the initial prompt and 1165 for each subsequent round. The average number of rounds to achieve maximum similarity is approximately 2.7, leading to a total of 4395 input tokens per test sample (900 for the initial prompt plus roughly three refinement rounds: 900 + 3 \u00d7 1165 = 4395). It\u2019s noteworthy that the examples from natural images shown in Figure 6 demonstrate that even with 1-2 rounds, we can achieve an approximation of the targeted image. For output tokens, the average is 381 per round. Given the current rates on OpenAI\u2019s website, the total cost for using GPT-4V, considering both input and output, is approximately $0.09. Additionally, a small fee of approximately $0.03 applies for including images in queries, based on the average number of rounds. Text-to-Image API Cost. Our experiments utilize either Midjourney or DALL-E 3, so we calculate the cost for each separately. Based on the rates from Midjourney\u2019s and OpenAI\u2019s websites, the cost per image generation is about $0.03 for Midjourney and $0.04 for DALL-E 3 (for 1024 \u00d7 1024 resolution images). Consequently, the total generation cost using Midjourney is approximately $0.12, and for DALL-E 3, it\u2019s around $0.16. Overall Cost. The total cost per sample depends on the text-to-image API used. If the attacker utilizes Midjourney, the cost is approximately $0.235 per sample, and for DALL-E 3, it is around $0.275. These costs are substantially lower than the typical prices in AI-generated image marketplaces (ranging from $3 to $7) and significantly below the cost of stocks of real images (which can be $50 to $500).
8 Conclusion
This research has unveiled a novel attack strategy in the realm of digital imagery, specifically targeting AI-generated image marketplaces and premium stock image providers. By effectively employing state-of-the-art multi-modal models, our method has demonstrated the ability to generate visually comparable content at a significantly reduced cost, challenging the current economic dynamics of the digital imagery landscape. Moreover, the compilation of a large-scale dataset from Midjourney is a pivotal contribution to this field. This dataset not only aids our research but also serves as a valuable resource for future studies, offering insights into the capabilities and challenges of text-to-image technologies. In conclusion, our study highlights new threats against digital imagery, stressing the need for urgent action in identifying and countering these risks. Future research should aim at developing protective measures to preserve digital image integrity amidst advancing AI technologies."
+ }, + { + "url": "http://arxiv.org/abs/2302.09923v2", + "title": "Prompt Stealing Attacks Against Text-to-Image Generation Models", + "abstract": "Text-to-Image generation models have revolutionized the artwork design\nprocess and enabled anyone to create high-quality images by entering text\ndescriptions called prompts. Creating a high-quality prompt that consists of a\nsubject and several modifiers can be time-consuming and costly. In consequence,\na trend of trading high-quality prompts on specialized marketplaces has\nemerged. In this paper, we perform the first study on understanding the threat\nof a novel attack, namely prompt stealing attack, which aims to steal prompts\nfrom generated images by text-to-image generation models. Successful prompt\nstealing attacks directly violate the intellectual property of prompt engineers\nand jeopardize the business model of prompt marketplaces. We first perform a\nsystematic analysis on a dataset collected by ourselves and show that a\nsuccessful prompt stealing attack should consider a prompt's subject as well as\nits modifiers. Based on this observation, we propose a simple yet effective\nprompt stealing attack, PromptStealer. It consists of two modules: a subject\ngenerator trained to infer the subject and a modifier detector for identifying\nthe modifiers within the generated image. Experimental results demonstrate that\nPromptStealer is superior over three baseline methods, both quantitatively and\nqualitatively. We also make some initial attempts to defend PromptStealer. In\ngeneral, our study uncovers a new attack vector within the ecosystem\nestablished by the popular text-to-image generation models. We hope our results\ncan contribute to understanding and mitigating this emerging threat.", + "authors": "Xinyue Shen, Yiting Qu, Michael Backes, Yang Zhang", + "published": "2023-02-20", + "updated": "2024-04-15", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2009.11278v1", + "title": "X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers", + "abstract": "Mirroring the success of masked language models, vision-and-language\ncounterparts like ViLBERT, LXMERT and UNITER have achieved state of the art\nperformance on a variety of multimodal discriminative tasks like visual\nquestion answering and visual grounding. Recent work has also successfully\nadapted such models towards the generative task of image captioning. This begs\nthe question: Can these models go the other way and generate images from pieces\nof text? Our analysis of a popular representative from this model family -\nLXMERT - finds that it is unable to generate rich and semantically meaningful\nimagery with its current training setup. We introduce X-LXMERT, an extension to\nLXMERT with training refinements including: discretizing visual\nrepresentations, using uniform masking with a large range of masking ratios and\naligning the right pre-training datasets to the right objectives which enables\nit to paint. X-LXMERT's image generation capabilities rival state of the art\ngenerative models while its question answering and captioning abilities remains\ncomparable to LXMERT. 
Finally, we demonstrate the generality of these training\nrefinements by adding image generation capabilities into UNITER to produce\nX-UNITER.", + "authors": "Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, Aniruddha Kembhavi", + "published": "2020-09-23", + "updated": "2020-09-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.12092v2", + "title": "Zero-Shot Text-to-Image Generation", + "abstract": "Text-to-image generation has traditionally focused on finding better modeling\nassumptions for training on a fixed dataset. These assumptions might involve\ncomplex architectures, auxiliary losses, or side information such as object\npart labels or segmentation masks supplied during training. We describe a\nsimple approach for this task based on a transformer that autoregressively\nmodels the text and image tokens as a single stream of data. With sufficient\ndata and scale, our approach is competitive with previous domain-specific\nmodels when evaluated in a zero-shot fashion.", + "authors": "Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever", + "published": "2021-02-24", + "updated": "2021-02-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1503.03585v8", + "title": "Deep Unsupervised Learning using Nonequilibrium Thermodynamics", + "abstract": "A central problem in machine learning involves modeling complex data-sets\nusing highly flexible families of probability distributions in which learning,\nsampling, inference, and evaluation are still analytically or computationally\ntractable. Here, we develop an approach that simultaneously achieves both\nflexibility and tractability. The essential idea, inspired by non-equilibrium\nstatistical physics, is to systematically and slowly destroy structure in a\ndata distribution through an iterative forward diffusion process. We then learn\na reverse diffusion process that restores structure in data, yielding a highly\nflexible and tractable generative model of the data. This approach allows us to\nrapidly learn, sample from, and evaluate probabilities in deep generative\nmodels with thousands of layers or time steps, as well as to compute\nconditional and posterior probabilities under the learned model. We\nadditionally release an open source reference implementation of the algorithm.", + "authors": "Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli", + "published": "2015-03-12", + "updated": "2015-11-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cond-mat.dis-nn", + "q-bio.NC", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.10752v2", + "title": "High-Resolution Image Synthesis with Latent Diffusion Models", + "abstract": "By decomposing the image formation process into a sequential application of\ndenoising autoencoders, diffusion models (DMs) achieve state-of-the-art\nsynthesis results on image data and beyond. Additionally, their formulation\nallows for a guiding mechanism to control the image generation process without\nretraining. However, since these models typically operate directly in pixel\nspace, optimization of powerful DMs often consumes hundreds of GPU days and\ninference is expensive due to sequential evaluations. 
To enable DM training on\nlimited computational resources while retaining their quality and flexibility,\nwe apply them in the latent space of powerful pretrained autoencoders. In\ncontrast to previous work, training diffusion models on such a representation\nallows for the first time to reach a near-optimal point between complexity\nreduction and detail preservation, greatly boosting visual fidelity. By\nintroducing cross-attention layers into the model architecture, we turn\ndiffusion models into powerful and flexible generators for general conditioning\ninputs such as text or bounding boxes and high-resolution synthesis becomes\npossible in a convolutional manner. Our latent diffusion models (LDMs) achieve\na new state of the art for image inpainting and highly competitive performance\non various tasks, including unconditional image generation, semantic scene\nsynthesis, and super-resolution, while significantly reducing computational\nrequirements compared to pixel-based DMs. Code is available at\nhttps://github.com/CompVis/latent-diffusion .", + "authors": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Bj\u00f6rn Ommer", + "published": "2021-12-20", + "updated": "2022-04-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.11487v1", + "title": "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding", + "abstract": "We present Imagen, a text-to-image diffusion model with an unprecedented\ndegree of photorealism and a deep level of language understanding. Imagen\nbuilds on the power of large transformer language models in understanding text\nand hinges on the strength of diffusion models in high-fidelity image\ngeneration. Our key discovery is that generic large language models (e.g. T5),\npretrained on text-only corpora, are surprisingly effective at encoding text\nfor image synthesis: increasing the size of the language model in Imagen boosts\nboth sample fidelity and image-text alignment much more than increasing the\nsize of the image diffusion model. Imagen achieves a new state-of-the-art FID\nscore of 7.27 on the COCO dataset, without ever training on COCO, and human\nraters find Imagen samples to be on par with the COCO data itself in image-text\nalignment. To assess text-to-image models in greater depth, we introduce\nDrawBench, a comprehensive and challenging benchmark for text-to-image models.\nWith DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP,\nLatent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen\nover other models in side-by-side comparisons, both in terms of sample quality\nand image-text alignment. See https://imagen.research.google/ for an overview\nof the results.", + "authors": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. 
Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi", + "published": "2022-05-23", + "updated": "2022-05-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1711.10485v1", + "title": "AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks", + "abstract": "In this paper, we propose an Attentional Generative Adversarial Network\n(AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained\ntext-to-image generation. With a novel attentional generative network, the\nAttnGAN can synthesize fine-grained details at different subregions of the\nimage by paying attentions to the relevant words in the natural language\ndescription. In addition, a deep attentional multimodal similarity model is\nproposed to compute a fine-grained image-text matching loss for training the\ngenerator. The proposed AttnGAN significantly outperforms the previous state of\nthe art, boosting the best reported inception score by 14.14% on the CUB\ndataset and 170.25% on the more challenging COCO dataset. A detailed analysis\nis also performed by visualizing the attention layers of the AttnGAN. It for\nthe first time shows that the layered attentional GAN is able to automatically\nselect the condition at the word level for generating different parts of the\nimage.", + "authors": "Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, Xiaodong He", + "published": "2017-11-28", + "updated": "2017-11-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.13290v3", + "title": "CogView: Mastering Text-to-Image Generation via Transformers", + "abstract": "Text-to-Image generation in the general domain has long been an open problem,\nwhich requires both a powerful generative model and cross-modal understanding.\nWe propose CogView, a 4-billion-parameter Transformer with VQ-VAE tokenizer to\nadvance this problem. We also demonstrate the finetuning strategies for various\ndownstream tasks, e.g. style learning, super-resolution, text-image ranking and\nfashion design, and methods to stabilize pretraining, e.g. eliminating NaN\nlosses. CogView achieves the state-of-the-art FID on the blurred MS COCO\ndataset, outperforming previous GAN-based models and a recent similar work\nDALL-E.", + "authors": "Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, Jie Tang", + "published": "2021-05-26", + "updated": "2021-11-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1605.05396v2", + "title": "Generative Adversarial Text to Image Synthesis", + "abstract": "Automatic synthesis of realistic images from text would be interesting and\nuseful, but current AI systems are still far from this goal. However, in recent\nyears generic and powerful recurrent neural network architectures have been\ndeveloped to learn discriminative text feature representations. Meanwhile, deep\nconvolutional generative adversarial networks (GANs) have begun to generate\nhighly compelling images of specific categories, such as faces, album covers,\nand room interiors. 
In this work, we develop a novel deep architecture and GAN\nformulation to effectively bridge these advances in text and image modeling,\ntranslating visual concepts from characters to pixels. We demonstrate the\ncapability of our model to generate plausible images of birds and flowers from\ndetailed text descriptions.", + "authors": "Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee", + "published": "2016-05-17", + "updated": "2016-06-05", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.12959v1", + "title": "Prompt Stealing Attacks Against Large Language Models", + "abstract": "The increasing reliance on large language models (LLMs) such as ChatGPT in\nvarious fields emphasizes the importance of ``prompt engineering,'' a\ntechnology to improve the quality of model outputs. With companies investing\nsignificantly in expert prompt engineers and educational resources rising to\nmeet market demand, designing high-quality prompts has become an intriguing\nchallenge. In this paper, we propose a novel attack against LLMs, named prompt\nstealing attacks. Our proposed prompt stealing attack aims to steal these\nwell-designed prompts based on the generated answers. The prompt stealing\nattack contains two primary modules: the parameter extractor and the prompt\nreconstruction. The goal of the parameter extractor is to figure out the\nproperties of the original prompts.
We first observe that most prompts fall\ninto one of three categories: direct prompt, role-based prompt, and in-context\nprompt. Our parameter extractor first tries to distinguish the type of prompts\nbased on the generated answers. Then, it can further predict which role or how\nmany contexts are used based on the types of prompts. Following the parameter\nextractor, the prompt reconstructor can be used to reconstruct the original\nprompts based on the generated answers and the extracted features. The final\ngoal of the prompt reconstructor is to generate the reversed prompts, which are\nsimilar to the original prompts. Our experimental results show the remarkable\nperformance of our proposed attacks. Our proposed attacks add a new dimension\nto the study of prompt engineering and call for more attention to the security\nissues on LLMs.", + "authors": "Zeyang Sha, Yang Zhang", + "published": "2024-02-20", + "updated": "2024-02-20", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.10789v1", + "title": "Scaling Autoregressive Models for Content-Rich Text-to-Image Generation", + "abstract": "We present the Pathways Autoregressive Text-to-Image (Parti) model, which\ngenerates high-fidelity photorealistic images and supports content-rich\nsynthesis involving complex compositions and world knowledge. Parti treats\ntext-to-image generation as a sequence-to-sequence modeling problem, akin to\nmachine translation, with sequences of image tokens as the target outputs\nrather than text tokens in another language. This strategy can naturally tap\ninto the rich body of prior work on large language models, which have seen\ncontinued advances in capabilities and performance through scaling data and\nmodel sizes. Our approach is simple: First, Parti uses a Transformer-based\nimage tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens.\nSecond, we achieve consistent quality improvements by scaling the\nencoder-decoder Transformer model up to 20B parameters, with a new\nstate-of-the-art zero-shot FID score of 7.23 and finetuned FID score of 3.22 on\nMS-COCO. Our detailed analysis on Localized Narratives as well as PartiPrompts\n(P2), a new holistic benchmark of over 1600 English prompts, demonstrate the\neffectiveness of Parti across a wide variety of categories and difficulty\naspects. We also explore and highlight limitations of our models in order to\ndefine and exemplify key areas of focus for further improvements. See\nhttps://parti.research.google/ for high-resolution images.", + "authors": "Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, Yonghui Wu", + "published": "2022-06-22", + "updated": "2022-06-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. 
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. 
Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. 
Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. 
These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of their capabilities, how to rationally evaluate the\ncapabilities of LLMs remains an open problem. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into work oriented to medium-sized LLMs under pre-training and\nfine-tuning paradigms and work oriented to large-sized LLMs under prompting\nparadigms. First, for medium-sized LLMs, we introduce evaluation metrics and\ndebiasing methods from the perspectives of intrinsic bias and extrinsic bias,\nrespectively. Then, for large-sized LLMs, we introduce recent fairness\nresearch, including fairness evaluation, reasons for bias, and debiasing\nmethods. Finally, we discuss and provide insight into the challenges and future\ndirections for the development of fairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. 
Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). 
Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make a thorough evaluation, we consider both group fairness and\nindividual fairness, and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. 
Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat lead to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes. Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. 
Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. 
To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. 
Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention. For example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. 
Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. 
We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. 
Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. 
Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs robustness on the task of\nmultiple-choice questions -- commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific options placements may favor certain prediction\nbetween those top choices depending on the question caused by positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. 
This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborated-designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunder-satisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research for harmless LLM research.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. 
In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate these biases from a\ngroup fairness perspective, we pioneer a novel chain-of-thought method,\nGF-Think. Experimental results demonstrate its efficacy in mitigating bias in\nLLMs to achieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity and equal representation based on factors such as race\nand gender, and promotes responsible AI deployment. As the use of LLMs has\nbecome increasingly prevalent, it is essential to assess whether LLMs can\ngenerate fair outcomes when subjected to considerations of fairness. In this\nstudy, we introduce a framework outlining fairness regulations aligned with\nvarious fairness definitions, with each definition being modulated by varying\ndegrees of abstraction. We explore the configuration for in-context learning\nand the procedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. 
Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. Thus relying primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human-alignment.\nInterestingly, we found that context had no impact on the LLMs performances.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. 
IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. 
This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep-rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid.
Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers have started to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and at a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving.
Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination.
SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.10199v3", + "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", + "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated with each culture by the LLM.\nWe discover that culture-conditioned generations consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found at: https://github.com/huihanlhh/Culture-Gen/", + "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", + "published": "2024-04-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Models\n(LLMs) to evaluate the quality of other LLMs. Many studies have employed\nproprietary closed-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability.
Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. 
We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. 
Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. 
Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs.
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. 
Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is urgent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs -\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second.
We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. 
The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. 
We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey\nserves as a foundational reference for researchers and practitioners, aiding\nthem in developing more sustainable and efficient LLMs in a rapidly evolving\nlandscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs.
Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ) which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities.
Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.14345v2", + "title": "Bias Testing and Mitigation in LLM-based Code Generation", + "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% of code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% of code bias can be removed with one-shot\nlearning.", + "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui", + "published": "2023-09-03", + "updated": "2024-01-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models.
Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into the models' limitations.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics of LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of an objective and fair benchmark that encompasses most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction.
Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, concerns\nabout the appropriate use of these benchmarks and the fair comparison of\ndifferent models are growing. Considering these concerns, in this paper, we\ndiscuss the potential risk and impact of inappropriately using evaluation\nbenchmarks and misleadingly interpreting the evaluation results. Specifically,\nwe focus on a special issue that would lead to inappropriate evaluation, \\ie\n\\emph{benchmark leakage}, referring to the case where data related to\nevaluation sets is occasionally used for model training. This phenomenon has\nbecome more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould finally lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality.
Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherently multi-task language checkers based on their latent\nrepresentations of natural and social knowledge.
We present an interpretable,\nunified language-checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the '1/2-shot' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that, based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs\naligned with real-world scenarios: fair debate, mismatched debate, and\nroundtable debate. Through extensive experiments on various datasets, we show\nthat LLMs can effectively collaborate to reach a consensus despite noticeable\ninter-inconsistencies, but imbalances in their abilities can lead to domination\nby superior LLMs. Leveraging a more advanced LLM like GPT-4 as an authoritative\njudge can boost collaboration performance. Our work contributes to\nunderstanding the inter-consistency among LLMs and lays the foundation for\ndeveloping future collaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data.
We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic?
By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline, where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines the interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for an ensemble\nof models. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks.
On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are poised to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of the bias in question; and General\nPrompting, which, while slightly less effective, offers debiasing across\nvarious categories. We leverage resource-efficient LLM debiasing using adapter\ntuning and compare the effectiveness of our synthetic data to existing\ndebiasing datasets. Our results reveal that: (1) ChatGPT can efficiently\nproduce high-quality training data for debiasing other LLMs; (2) data produced\nvia our approach surpasses existing datasets in debiasing performance while\nalso preserving the internal knowledge of a pre-trained LLM; and (3) synthetic\ndata exhibits generalizability across categories, effectively mitigating\nvarious biases, including intersectional ones. These findings underscore the\npotential of synthetic data in advancing the fairness of LLMs with minimal\nretraining cost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nprompted a critical reevaluation of fairness in text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat LLMs exhibit better performance than traditional ranking models in the\nranking task. However, their fairness remains largely unexplored.
This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as fair rankers.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that LLMs in early pre-training can already\ndistinguish concepts in each trustworthiness dimension. Therefore, to further\nuncover the hidden possibilities of pre-training, we extract steering vectors\nfrom an LLM's pre-training checkpoints to enhance the LLM's trustworthiness.
Finally, inspired by the finding of Choi et al. (2023) that mutual\ninformation estimation is bounded by linear-probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression (Shwartz-Ziv and Tishby, 2017). This\nresearch provides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\nhttps://github.com/ChnQ/TracingLLM.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding.
These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we perform a large investigation\nof the bias of three families of CtB-LLMs, and we show that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families exhibit significant bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are, however, bias and\nfairness concerns in modern chatbot design. Due to the huge amounts of training\ndata, extremely large model sizes, and lack of interpretability, bias\nmitigation and fairness preservation in modern chatbots are challenging. Thus,\na comprehensive overview of bias and fairness in chatbot systems is given in\nthis paper. The history of chatbots and their categories are first reviewed.\nThen, bias sources and potential harms in applications are analyzed.\nConsiderations in designing fair and unbiased chatbot systems are examined.\nFinally, future research directions are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.15228v1", + "title": "Re-Thinking Inverse Graphics With Large Language Models", + "abstract": "Inverse graphics -- the task of inverting an image into physical variables\nthat, when rendered, enable reproduction of the observed scene -- is a\nfundamental challenge in computer vision and graphics. Disentangling an image\ninto its constituent elements, such as the shape, color, and material\nproperties of the objects of the 3D scene that produced it, requires a\ncomprehensive understanding of the environment. This requirement limits the\nability of existing carefully engineered approaches to generalize across\ndomains. Inspired by the zero-shot ability of large language models (LLMs) to\ngeneralize to novel contexts, we investigate the possibility of leveraging the\nbroad world knowledge encoded in such models in solving inverse-graphics\nproblems. To this end, we propose the Inverse-Graphics Large Language Model\n(IG-LLM), an inverse-graphics framework centered around an LLM that\nautoregressively decodes a visual embedding into a structured, compositional\n3D-scene representation.
We incorporate a frozen pre-trained visual encoder and\na continuous numeric head to enable end-to-end training. Through our\ninvestigation, we demonstrate the potential of LLMs to facilitate inverse\ngraphics through next-token prediction, without the use of image-space\nsupervision. Our analysis opens up new possibilities for precise spatial\nreasoning about images that exploit the visual knowledge of LLMs. We will\nrelease our code and data to ensure the reproducibility of our investigation\nand to facilitate future research at https://ig-llm.is.tue.mpg.de/", + "authors": "Peter Kulits, Haiwen Feng, Weiyang Liu, Victoria Abrevaya, Michael J. Black", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Visual Program Induction. Visual program induction is a subfield of program synthesis that is focused on recovering a graphics program from a given visual target (Gulwani et al., 2017). Graphics programs, also known as procedural or symbolic programs, offer a concise, structured, and interpretable representation for scenes and have garnered significant attention in the field \u2013 see Ritchie et al. (2023) for an in-depth overview. Commonly employed program types include constructive-solid geometry (CSG) (Du et al., 2018; Kania et al., 2020; Ren et al., 2022; Sharma et al., 2018a; Yu et al., 2022), computer-aided design (CAD) Ganin et al. (2018); Li et al. (2020a; 2022a); Seff et al. (2022); Xu et al. (2021), vector graphics (e.g., SVG) (Reddy et al., 2021a;b), and L-systems (Guo et al., 2020), as well as custom program domains (Ellis et al., 2018; Tian et al., 2019; Deng et al., 2022; Hu et al., 2022b). Program discovery can be achieved from simplified representations of the same modality, such as 2D hand drawings or synthetic patterns (\u0160t\u2019ava et al., 2010; 2014; Sharma et al., 2018a; Ellis et al., 2019; Riso et al., 2022; Ganeshan et al., 2023; Seff et al., 2022), as well as 3D meshes and voxels (Ganeshan et al., 2023; Jones et al., 2022; Tian et al., 2019; Bokeloh et al., 2010; Willis et al., 2021; Sharma et al., 2018a; Ellis et al., 2019). There have also been efforts in recovering 3D scenes from natural 2D images using graphics programs (Mansinghka et al., 2013; Kulkarni et al., 2015; Wu et al., 2017; Yi et al., 2018; Mao et al., 2019; Liu et al., 2019; Li et al., 2020b; Gothoskar et al., 2021; Ganin et al., 2018; Kar et al., 2019; Devaranjan et al., 2020). Kulkarni et al. (2015) propose a probabilistic programming language for representing arbitrary 2D/3D scenes, demonstrating preliminary results for analysis of faces, bodies, and objects. Wu et al. (2017) infer custom-designed markup code from images that can be easily translated to renderer-friendly inputs. The work can handle scenes with a number of objects, but cannot generalize beyond the training data distribution. In follow-up work, Yi et al. (2018) investigate how graphics programs can be used for visual question answering (VQA). However, their scene reconstruction also struggles with generalization problems, particularly to unseen attribute combinations. Liu et al. (2019) present a new language for representing scenes, along with a hierarchical approach for inference. Meta-SIM (Kar et al., 2019; Devaranjan et al., 2020) uses probabilistic scene grammars to recover synthetic scenes from natural images, which are then used to train a generative scene-synthesis model. 
Despite promising results, such methods require special training data and complex modular architectures, and are difficult to generalize beyond their training distribution. Vision as Inverse Graphics. Dating back to Larry Roberts's Blocks-World thesis (Roberts, 1963), there has been a long history of work that treats computer vision as the inverse problem to computer graphics. Efforts have included estimating object pose (Lepetit et al., 2009; Tejani et al., 2014; Pavlakos et al., 2017; Xiang et al., 2018; Wang et al., 2021b; 2019; 2021a; Ma et al., 2022; Labbé et al., 2020) and reconstructing shape (Choy et al., 2016; Fan et al., 2017; Groueix et al., 2018; Mescheder et al., 2019; Wang et al., 2018; Sitzmann et al., 2019; Park et al., 2019) from single images. Multi-object scenes have also been recovered via geometric approaches (Gkioxari et al., 2019; Denninger & Triebel, 2020; Shin et al., 2019), but they often overlook the semantics and interrelation between objects, hindering further reasoning. Holistic 3D-scene understanding takes the approach a step further by reconstructing individual objects together with the scene layout. Early methods focus on estimating 3D bounding-box representations (Hedau et al., 2009; Lee et al., 2009; Mallya & Lazebnik, 2015; Ren et al., 2017; Dasgupta et al., 2016), whereas more-recent works emphasize the reconstruction of finer shapes (Zhang et al., 2021; Liu et al., 2022; Gkioxari et al., 2022) along with instance segmentations (Kundu et al., 2022; Yao et al., 2018; Dahnert et al., 2021; Nie et al., 2020; Zhang et al., 2023b; Nie et al., 2023). Closely related are methods that perform CAD or mesh-model retrieval followed by 6-DOF pose estimation of individual objects (Aubry et al., 2014; Bansal et al., 2016; Lim et al., 2014; Tulsiani & Malik, 2015) or scenes (Izadinia et al., 2017; Huang et al., 2018; Salas-Moreno et al., 2013; Kundu et al., 2018; Gümeli et al., 2022; Kuo et al., 2020; 2021; Engelmann et al., 2021). An alternative to detailed shape reconstruction is primitive reconstruction, where objects or scenes are explained by a limited set of geometric primitives, offering a higher level of abstraction. This direction has been studied extensively (Roberts, 1963; Binford, 1975; Hedau et al., 2009; Gupta et al., 2010a;b), and it is still actively researched (van den Hengel et al., 2015; Tulsiani et al., 2017; Paschalidou et al., 2019; 2020; 2021; Deng et al., 2020; Kluger et al., 2021; Monnier et al., 2023; Vavilala & Forsyth, 2023). While these works typically produce accurate reconstructions, they involve complex pipelines with multiple modules and require special training data, limiting generalization under distribution shifts. In contrast, we explore the use of LLMs as a potentially simpler and more-generalizable solution to the inverse-graphics problem. LLMs and 3D Understanding. Recent and concurrent efforts have explored the use of LLMs for 3D-related tasks, including 3D question answering (Hong et al., 2023; Dwedari et al., 2023), navigation (Hong et al., 2023; Zhang et al., 2023a), text-to-3D (Sun et al., 2023), procedural model editing (Kodnongbua et al., 2023), and multi-modal representation learning (Xue et al., 2023; Hong et al., 2023).
To the best of our knowledge, our work is the first to investigate the application of LLMs to inverse-graphics tasks.", + "pre_questions": [], + "main_content": "Introduction The formulation of vision as “inverse graphics” traces its roots back at least to Baumgart (1974) (see also Kersten & Yuille (1996) and Yuille & Kersten (2006)). While the term encompasses various ideas and approaches to vision problems, it is often equated with “analysis by synthesis” (Grenander, 1976–1981). What is typically meant here, however, is more akin to model fitting. This generally presupposes that one has models of the world, knows roughly where they are, and then fits them to image evidence. A more strict interpretation of inverse graphics targets the creation of a graphics program (Ritchie et al., 2023): a structured representation that can be used by a rendering engine to approximately reproduce the 3D scene. These programs are compact and interpretable representations of visual primitives (Wu et al., 2017; Jones et al., 2023), thereby aiding scene comprehension (Wu et al., 2017; Yi et al., 2018). [Figure 1: IG-LLM. An input image is embedded by a frozen CLIP encoder into visual tokens for the LLM, which is prompted with “What Python Blender code could be used to produce the scene?” and decodes a graphics program such as add(size=\"big\", color=\"purple\", material=\"rubber\", shape=\"cube\", loc=(-0.882, 1.742, 0.697), rot=-0.238). Evaluations: Sec. 4.1: Compositional Generalization; Sec. 4.2: Parameter-Space Generalization; Sec. 4.3: Visual-Domain Generalization. We present the Inverse-Graphics Large Language Model (IG-LLM) framework, a general approach to solving inverse-graphics problems. We instruction-tune an LLM to decode a visual (CLIP) embedding into graphics code that can be used to reproduce the observed scene using a standard graphics engine. Leveraging the broad reasoning abilities of LLMs, we demonstrate that our framework exhibits natural generalization across a variety of distribution shifts without the use of special inductive biases.] The objective extends beyond mere pixel- or object-level interpretation of an image; it seeks to leverage the inherent spatial and physical relationships among objects that are essential for holistic scene understanding. Our goal is to generate such a graphics program from a single image capable of reproducing the 3D scene and its constituent objects using a traditional graphics engine. This approach, known as visual program induction, has garnered significant attention, encompassing works on a variety of problem domains. Wu et al. (2017) propose the concept of “neural scene de-rendering,” wherein custom markup code, translatable into renderer-friendly inputs, is inferred from images. While their method handles synthetic scenes featuring arbitrarily many objects, it grapples with generalization beyond its training-data distribution. Subsequent research (Yi et al., 2018) explores the utility of graphics programs for the downstream task of visual question answering (VQA). However, their scene-reconstruction method still struggles with generalization, particularly regarding objects with unseen attribute combinations (e.g., a known shape with a novel shape–color combination). To address the problem of generalization, a method must possess a deep understanding of the visual world and its physical properties. Here, we explore whether we can exploit the generalization abilities of large language models (LLMs) for this purpose.
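Since the add call in Figure 1 is ordinary Python, it helps to see how one such statement could be executed against Blender's scripting API (bpy) to rebuild a scene. The signature follows the figure; the function body, the size scaling, and the color table are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: executing one statement of the scene DSL with Blender's
# Python API (bpy). The constants below are assumptions for illustration.
import bpy

PRIMITIVES = {
    "cube": bpy.ops.mesh.primitive_cube_add,
    "sphere": bpy.ops.mesh.primitive_uv_sphere_add,
    "cylinder": bpy.ops.mesh.primitive_cylinder_add,
}
COLOR_RGBA = {"purple": (0.5, 0.15, 0.8, 1.0)}  # hypothetical color lookup
SCALE = {"big": 0.7, "small": 0.35}             # hypothetical size mapping

def add(size, color, material, shape, loc, rot):
    """Insert one object with the given attributes into the active scene."""
    PRIMITIVES[shape](location=loc, rotation=(0.0, 0.0, rot))
    obj = bpy.context.object
    obj.scale = (SCALE[size],) * 3
    mat = bpy.data.materials.new(name=f"{color}-{material}")
    mat.diffuse_color = COLOR_RGBA[color]
    obj.data.materials.append(mat)

add(size="big", color="purple", material="rubber", shape="cube",
    loc=(-0.882, 1.742, 0.697), rot=-0.238)
```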
LLMs have demonstrated remarkable performance across a wide variety of vision–language tasks, ranging from producing detailed textual descriptions of images (Alayrac et al., 2022) and generating realistic images from text (Koh et al., 2023), to tasks such as visual question answering (Ouyang et al., 2022; OpenAI, 2023), visual instruction following (Liu et al., 2023; Zhu et al., 2023), and robot planning (Huang et al., 2022; Singh et al., 2023). Intriguingly, these models are designed with generic architectures and are initially trained with objectives that are not specifically tailored to a downstream task. The breadth of their training data endows them with the capacity for compositional reasoning about the world. However, their proficiency in conducting precise spatial reasoning within the 3D Euclidean world remains largely unexplored. This prompts the question: Can LLMs, originally used to address semantic-level queries, be applied to the precise realm of inverse-graphics tasks? And if so, how? To address these questions, we investigate the potential of LLMs to perform such tasks. We hypothesize that LLMs can be trained with simple demonstrations to perform precise inverse-graphics reasoning. This idea draws inspiration from instruction tuning (Taori et al., 2023; Chung et al., 2024) in the language-processing domain, where LLMs acquire instruction-following skills after being fine-tuned on a limited set of curated data. We anticipate that LLMs, endowed with broad knowledge about the physical world, can be taught to recover accurate graphics programs from images beyond the training distribution. This insight motivates a re-evaluation of the conventional inverse-graphics pipeline, leading to the proposal of a new LLM-based framework: the Inverse-Graphics Large Language Model (IG-LLM). We fine-tune an LLM equipped with a pretrained text-aligned vision encoder, using an instruction-based synthetic dataset, and explore the model's capacity to infer graphics programs with accurate estimates of object quantity, shape, size, color, material, location, and orientation, as illustrated in Fig. 1. However, a question arises regarding the suitability of LLMs and natural language for generating the precise measurements necessary for inverse graphics, given the discrete nature of their token-based output. This constraint poses challenges for reasoning within metric spaces such as Euclidean space. To address this, we explore the integration of a numeric head in the language-based output (see Fig. 2b), where numbers are represented as continuous values rather than discrete-token sequences. We compare this numeric variant against purely character-based decoding and observe that it achieves improved precision and an expanded generalization capacity across evaluations. Our study is an examination of the adaptability of LLMs to novel domains and an attempt to understand how these powerful, semantically driven models can be repurposed and refined to gain a precise, metric understanding of the 3D world. While our investigation is preliminary, our work paves the way for further endeavors to capitalize on the rapid advancements in LLMs. The goal of this work is to assess the efficacy of LLMs in inverse-graphics tasks. We frame the problem as that of estimating a graphics program from a single image (see Fig. 1) and fine-tune an LLM using a small, synthetic dataset. We begin by analyzing the advantages this approach offers over traditional approaches in Sec. 3.1. Subsequently, we delineate the details of our methodology in Sec. 3.3.
Finally, we elaborate on the design and motivation of the numeric head to enable precise metric reasoning in Sec. 3.4. 3.1 Traditional Neural Scene De-Rendering Our approach builds upon the concept of neural scene de-rendering introduced by Wu et al. (2017). In this framework, the goal is to develop a generalizable model capable of comprehensively understanding a scene by estimating a graphics program executable by a renderer. NS-VQA (Yi et al., 2018) is representative of this paradigm, where the task is addressed by decomposing the visual input using multiple modules with task-specific visual inductive biases. The method includes several components: a region-proposal network for object detection, a segmentation network for isolating the objects from the background, an attribute network for classifying various discrete graphics attributes, and a localization network for predicting the object's spatial location. Each network is independently trained in a supervised manner with a specific objective, and their outputs are aggregated to produce a structured representation of the scene. 3.2 What Do LLMs Offer? The broad success of LLMs can be largely attributed to their exceptional ability to generalize. Unlike models that rely on task-specific inductive biases or well-crafted training objectives, LLMs (Brown et al., 2020; Radford et al., 2019; Touvron et al., 2023) perform proficiently across a variety of language tasks with relatively minor design differences and a simple training approach. This success can be attributed to the scale of the models and the sets of in-the-wild data on which they are trained. A particularly intriguing development in recent years has been the adaptation of LLMs to downstream tasks through instruction-tuning (Chung et al., 2024; Wang et al., 2023; Chiang et al., 2023), where LLMs are fine-tuned on a small set of curated task-specific training samples (e.g., 52K instruction-following examples in Stanford Alpaca (Taori et al., 2023)). This suggests a paradigm shift from traditional approaches, where generalization is often attained by scaling the amount of task-specific training data. LLMs are primarily trained via an unsupervised next-token-prediction objective to perform language-completion tasks, thereby unifying various natural language processing (NLP) tasks within a generic framework. Our research aims to explore whether this generalized approach can be effectively extended to the task of scene de-rendering while preserving the LLMs' strong generalization capabilities. 3.3 Tuning LLMs for Inverse Graphics While LLMs are traditionally trained to complete language-token sequences, VQA works (Alayrac et al., 2022; Li et al., 2022b; Liu et al., 2023) have demonstrated that large pretrained vision transformers (Dosovitskiy et al., 2021; Radford et al., 2021) can be efficiently adapted as visual tokenizers. Such works unify image and language understanding, interleaving visual embeddings with language tokens for the LLM. In line with this approach, we adopt a similar strategy, constructing an LLM capable of “seeing” the input image and returning a structured code representation of the input scene. In the subsequent paragraphs, we detail the base architecture, elucidate the process of vision–language alignment, and introduce our methodology for preparing synthetic data for visual instruction fine-tuning. A high-level overview of our pipeline can be seen in Fig. 1. Architecture.
Our model is based on an instruction-tuned variant (Peng et al., 2023) of LLaMA-1 7B (Touvron et al., 2023) in conjunction with a frozen CLIP (Radford et al., 2021) vision encoder, serving as the visual tokenizer. We apply a learnable linear projection to link the vision embeddings with the word-embedding space of the LLM. Vision–Language Alignment. The linear vision-encoder projection is initially trained using the feature-alignment pre-training method from LLaVA (Liu et al., 2023). This training uses instruction sequences constructed from image–caption pairs sourced from the Conceptual Captions dataset (CC3M) (Sharma et al., 2018b). The LLM receives the projected image embedding, followed by a randomly sampled directive tasking the model to describe the image and its corresponding caption. Throughout this stage, all model parameters remain fixed except for the learnable vision projector. To ensure the generality of our model, we refrain from additional instruction tuning following this initial feature alignment. Training-Data Generation. CLEVR (Johnson et al., 2017) is a procedurally generated dataset of simple 3D objects on a plane. The primitives are assigned randomly sampled attributes such as shape (sphere, cube, and cylinder), size, color, material, and spatial pose. Shape, size, color, and material are discrete attributes, while pose is a continuous parameter specifying the object's location and orientation. The images from the sampled scenes are rendered using the Blender (2018) modeling software from its Python scripting API. Our domain-specific language consists of add functions, facilitating the insertion of objects with specified attributes into the scene using Blender. See Fig. 1 for an example of the representation of a single object. To train our model, we generate pairs of rendered images and their corresponding code, prompting the model with the rendered image followed by the question, “What Python Blender code could be used to produce the scene?”. The model is then trained with a standard next-token prediction objective (Bengio et al., 2000), aiming to maximize the conditional probability of the next token given the previous ones: p(x) = ∏_{i=1}^{n} p(s_i | s_1, …, s_{i−1}) (1), where s_i represents the i-th token. Numbers are rendered with three decimal places in the text-based (tokenized) training data. We order the add statements front-to-back in the objective token sequence and shuffle the order of the object attributes. See Figs. S.2 to S.6 for complete examples of each task.
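To make the training-pair construction concrete, the following is a minimal sketch of how a sampled scene could be serialized into the add(...) target sequence: objects ordered front-to-back, attribute order shuffled, and numbers rendered with three decimal places. The dictionary fields and the choice of depth axis are assumptions for illustration, not the authors' released code.

```python
import random

def serialize_scene(objects):
    """Serialize a sampled scene into the add(...) code string used as the
    training target: statements ordered front-to-back, attribute order
    shuffled, numbers written with three decimal places (Sec. 3.3)."""
    statements = []
    # Assumed convention: smaller y means closer to the camera (front-to-back).
    for obj in sorted(objects, key=lambda o: o["loc"][1]):
        attrs = [
            'size="{}"'.format(obj["size"]),
            'color="{}"'.format(obj["color"]),
            'material="{}"'.format(obj["material"]),
            'shape="{}"'.format(obj["shape"]),
            "loc=({:.3f}, {:.3f}, {:.3f})".format(*obj["loc"]),
            "rot={:.3f}".format(obj["rot"]),
        ]
        random.shuffle(attrs)  # attribute order is shuffled during training
        statements.append("add({})".format(", ".join(attrs)))
    return "\n".join(statements)

scene = [{"size": "big", "color": "purple", "material": "rubber",
          "shape": "cube", "loc": (-0.882, 1.742, 0.697), "rot": -0.238}]
print(serialize_scene(scene))
```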
[Figure 2: Numeric Head. (Sec. 3.4) (a) Discretized numerics: a number such as -0.882 is emitted digit-by-digit as discrete tokens. (b) Continuous numerics: a [NUM] token routes the final embedding through an MLP that outputs the value directly. Rather than producing digits as discrete tokens (a), we train our model to generate a [NUM] token when a number should be produced. The [NUM] token is used as a mask to signal that the embedding should instead be passed through the numeric head, preserving the gradient (b).] Differences From Traditional Approaches. The framework presented in this study marks a departure from conventional approaches. Notably, the visual-encoding process does not include graphics-specific inductive biases in its design. It undergoes no training for intermediate vision tasks, such as object detection, segmentation, or attribute regression. The model operates solely on rendered images without access to 3D assets. Moreover, the supervised fine-tuning process employs a training objective not directly related to the physical representation of the scene. We show in Sec. 4 that these departures do not impede performance or generalization capabilities compared with traditional approaches; in fact, they enhance them. Our experiments demonstrate a compositional-generalization ability without the need for tailor-made designs, surpassing the conventional approach by approximately 60% in OOD shape-recognition accuracy. 3.4 Precise Numeric Reasoning in LLMs Graphics programming requires the precise estimation of continuous quantities such as location and orientation, along with a comprehension of Euclidean space. This requirement extends beyond the coarse, semantic-level spatial reasoning (e.g. “left,” “in front of”) for which LLMs are typically employed, in tasks such as VQA (Antol et al., 2015). Estimating continuous values through character-based outputs essentially transforms the task into a discrete, combinatorial challenge. In this loss space, prediction errors do not reflect real metric distances – a ground-truth value of '4' is considered as close to a prediction of '3' as it is to '8,' highlighting the inherent limitations of this approach for tasks demanding high numerical precision. To address this challenge, we introduce a numeric head tailored to enable continuous parameter estimation. A visual representation of the numeric-head integration is depicted in Fig. 2b, contrasting with the discrete text-based alternative. The module is implemented as a four-layer MLP that processes the final hidden-layer output of the LLM and transforms it into a scalar value. To allow the LLM to discern between generating numerical values or textual information, we designate a special token in the vocabulary – [NUM] – which serves as a mask to indicate whether a number should be produced. During training, we apply an MSE loss on each number in addition to the next-token prediction loss (Eq. (1)) used on the [NUM] token itself. We systematically investigate the behavior of character-based and numeric IG-LLM variants for precise spatial reasoning in Sec. 4.2. Our empirical findings support our intuition regarding the limitations of the character-based output and demonstrate that the numeric head enables strong generalization when the testing samples are OOD in parameter space. These differences are highlighted throughout our evaluation. [Figure 3: OOD CLEVR-CoGenT Samples. (Sec. 4.1) (a) Input, (b) NS-VQA, (c) Ours. NS-VQA, with its modular design, fails to disentangle shape from color, while our framework is able to effectively generalize to OOD attribute combinations. See Fig. S.7 for additional samples.]
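A minimal PyTorch-style sketch of how the numeric head and joint training loss of Sec. 3.4 could be wired together; the layer sizes, names, and tensor layout are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NumericHead(nn.Module):
    """Four-layer MLP mapping a final LLM hidden state to a scalar (Sec. 3.4)."""
    def __init__(self, hidden_dim=4096):  # 4096 matches LLaMA-7B; an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, hidden):  # (..., hidden_dim) -> (...,)
        return self.mlp(hidden).squeeze(-1)

def joint_loss(logits, hidden, targets, num_mask, values, head):
    """Next-token cross-entropy on every target token (including [NUM] itself)
    plus an MSE term on the continuous values at [NUM] positions.

    logits: (B, T, V); hidden: (B, T, H); targets: (B, T) token ids;
    num_mask: (B, T) bool, True where the target token is [NUM];
    values: (B, T) ground-truth floats, read only at masked positions."""
    ce = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
    mse = F.mse_loss(head(hidden[num_mask]), values[num_mask])
    return ce + mse
```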
4 Evaluations To evaluate the ability of our proposed framework to generalize across distribution shifts, we design a number of focused evaluation settings. We conduct experiments on synthetic data in order to quantitatively analyze model capability under controlled shifts. 4.1 Compositional Generalization on CLEVR An extension to CLEVR, known as CLEVR-CoGenT (Johnson et al., 2017), serves as a benchmark for evaluating the compositional-generalization capabilities of VQA models. This benchmark assesses the model's ability to answer questions about scenes containing objects with unseen combinations of attributes. During training, the dataset is structured such that particular types of objects are only assigned specific combinations of attributes (e.g. blue cubes and red cylinders), while the testing data includes objects with attribute combinations not seen during training (e.g. red cubes and blue cylinders). We adapt this VQA dataset to our inverse-graphics problem domain, employing it for three primary purposes: 1) demonstrating that LLMs can effectively perform inverse graphics by testing on in-distribution (ID) data; 2) illustrating that LLMs exhibit robust compositional generalization to OOD data, while the baseline approach in NS-VQA (Yi et al., 2018) struggles in this setting; and 3) exploring the data-efficiency of our framework. Setting. Following the setting of CLEVR-CoGenT, our training set consists of images of scenes containing objects with only a subset of possible attribute combinations (shape, size, material, and color). In the ID condition, all cubes are rendered in gray, blue, brown, or yellow, and all cylinders are depicted in red, green, purple, or cyan. In contrast, in the OOD condition the color palettes of the shapes are swapped. Spheres are consistently depicted with all eight colors under both conditions. We train both our proposed framework and NS-VQA, our neural-scene de-rendering baseline, on 4k images from the ID condition and evaluate them on 1k images from both the ID and OOD conditions. We follow CLEVR and randomly apply their set of synonyms on the categorical attributes. Evaluation Metrics. To evaluate attribute-recognition accuracy, we employ linear-sum assignment on pairwise Euclidean distances to match predicted and ground-truth objects. However, since attribute-recognition accuracy does not account for missing or duplicated objects, we also evaluate the method's ability to produce accurate counts by computing the mean-absolute counting error between the predicted and ground-truth object sets across scenes (Count). [Table 1: CLEVR-CoGenT Results. (Sec. 4.1) While both our proposed framework and the baseline, NS-VQA, are able to achieve >99% accuracy on the ID condition, the baseline fails to generalize, with its shape-recognition accuracy dropping by 66.12%. Color, Mat., and Shape represent the respective accuracies; ↑ indicates greater is better, ↓ lower is better. Values are Char/Float/NS-VQA. ID: ↓L2 0.21/0.16/0.18, ↑Size 99.71/99.77/100.00, ↑Color 99.58/99.71/100.00, ↑Shape 99.51/99.59/100.00. OOD: ↓L2 0.22/0.17/0.18, ↑Size 99.74/99.80/100.00, ↑Color 98.60/98.14/99.95, ↑Shape 93.50/93.14/33.88.] [Figure 4: CLEVR Data Efficiency. (Sec. 4.1) Validation L2 positional error versus the number of training samples (500, 1000, 2000, 4000) for Char (ID), Float (ID), Char (OOD), and Float (OOD). We observe that the float-based model is consistently more data-efficient but that the difference between the models converges as the number of training samples reaches 4000. See Tab. S.1 for a full quantitative comparison.] Results. Both our proposed framework and NS-VQA achieve >99% accuracy on the ID condition (Tab. 1), which underscores LLMs' ability to perform comparably with domain-specific modular designs. However, when evaluated on the OOD condition, the shape-recognition accuracy of the baseline method drops substantially by 66.12%, while the accuracy of our pipeline decreases by only 6.01%. Notably, when spheres, observed with all colors during training, are removed from the evaluation, the shape accuracy of NS-VQA plummets further to 0.03%. Illustrative reconstructions from the OOD condition can be seen in Fig. 3.
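The matching procedure described under Evaluation Metrics can be sketched with SciPy's Hungarian solver; the array-based object encoding and return values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(pred_locs, gt_locs):
    """Match predicted to ground-truth objects via linear-sum assignment on
    pairwise Euclidean distances (Sec. 4.1, Evaluation Metrics); attribute
    accuracies can then be computed over the matched pairs."""
    cost = np.linalg.norm(pred_locs[:, None, :] - gt_locs[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return rows, cols, cost[rows, cols].mean()  # matched indices, mean L2

def count_error(pred_counts, gt_counts):
    """Mean-absolute counting error between predicted and ground-truth
    object counts across scenes (the Count metric)."""
    return np.mean(np.abs(np.asarray(pred_counts) - np.asarray(gt_counts)))
```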
In terms of data efficiency, we find the float-based model to be much more efficient than the char-based model in estimating the positions of objects, but the difference diminishes as the number of samples reaches 4k (Fig. 4). Hypothesizing that the char-based model has learned to compositionally retrieve the exact positions of (individual) objects from the training set rather than learning to effectively interpolate between training values, we measure the likelihood of predicting arbitrary three-decimal values. Our findings reveal that the char-based model is 6.41 times more likely to predict a particular value if that discrete value was observed during training. We further explore float-estimation dynamics in Sec. 4.2. 4.2 Numeric Parameter-Space Generalization In this section, we investigate the addition of a numeric head and the ability of our framework to generalize across parameter space. 4.2.1 2D Parameter Space We begin by scrutinizing the framework's ability to generalize in 2D parameter space across range gaps. To accomplish this, we create a dataset comprising 10k images, each featuring a red dot on a white background. During training, the model is shown images where the location of the dot is sampled from a sparse checkerboard grid, as shown in Fig. 5a. During evaluation, the model is shown 1k images where the dot's location is uniformly sampled across the square; points lying outside the checkerboard are effectively OOD inputs. Results. As shown in Fig. 5b, the char-based model exhibits significant overfitting to the training distribution, consistently predicting dot locations restricted to the checkerboard distribution observed during training. In contrast, the float-based model is able to effectively generalize across parameter space, adapting to the uniformly sampled testing distribution during evaluation (Fig. 5c). [Figure 5: 2D Parameter-Space Generalization. (Sec. 4.2.1) Panels: (a) train distribution, (b) char-model predictions, (c) float-model predictions, (d) ID–OOD validation L2, (e) validation-MSE training dynamics. (a) Training positions are sampled from the checkerboard. When evaluated on images with uniformly sampled positions, the char-based model fails to generalize outside the training distribution (b), while the float-based model effectively interpolates samples (c). Randomly sampled testing locations are shown in red and the corresponding predictions in blue. (d) shows that while both methods well-estimate samples from the ID condition, the char-based model struggles to generalize. (e) shows a plot of the model's validation MSE as a function of the number of training steps. We observe that the training of the float-based model is much smoother and converges quickly.] Although the float-based model exhibits a slight positional bias toward predicting positions on the grid (as evidenced by the higher OOD error), the disparity in the ID–OOD validation-L2 performance gap of the char-based model is 14 times as high as that of the float-based model (Fig. 5d). Moreover, the validation MSE of the float-based model converges quickly to near zero, while the error of the char-based model is much less stable over time (Fig. 5e), suggesting that the float-based model learns smooth, low-dimensional representations of the space while the char-based variant may not.
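The 2D range-gap setup lends itself to a compact sketch: training positions are drawn only from alternating cells of a checkerboard over the unit square, while evaluation positions are drawn uniformly. The grid resolution and parity convention are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 8  # checkerboard resolution; an illustrative assumption

def sample_train(n):
    """Dot locations restricted to alternating checkerboard cells (cf. Fig. 5a)."""
    points = []
    while len(points) < n:
        x, y = rng.uniform(0.0, 1.0, size=2)
        if (int(x * GRID) + int(y * GRID)) % 2 == 0:  # keep ID cells only
            points.append((x, y))
    return np.array(points)

def sample_eval(n):
    """Uniform over the unit square; points in the skipped cells are OOD."""
    return rng.uniform(0.0, 1.0, size=(n, 2))
```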
4.2.2 SO(3) Parameter Space [Figure 6: SO(3) Range Gap. (Sec. 4.2.2) (a) Range-gap visualization of the SO(3) sampling space: for training, Euler rotation components are sampled from the blue regions; in OOD, samples are exclusively drawn from the red ranges. (b) ID–OOD validation geodesic error: the float-based model outperforms the char-based variant on validation data sampled from both the ID and OOD conditions.] We continue our parameter-space evaluation within the more complex task of SO(3)-pose estimation of orientable objects. For this, we make use of five toy-airplane assets sourced from Super-CLEVR (Li et al., 2023). We construct a training dataset of 10k images of single planes at a fixed location and sampled attributes identical to those in CLEVR. Extending the range-gap setup used in Sec. 4.2.1, the airplanes are assigned random extrinsic-Euler rotations, where the components are sampled from ranges containing inserted gaps (e.g., [−π/20, π/20]). A visual depiction of this space is provided in Fig. 6a, with the training values exclusively sampled from the blue ranges. During testing, we invert the gaps to assess OOD generalization. We evaluate performance across intrinsic-Euler, extrinsic-Euler, axis-angle, and 6D (Zhou et al., 2019) representations. Results. We report full results in Tab. S.2 and here discuss only results from the best-performing representation variants in each evaluation, being intrinsic-Euler for char and 6D for float in ID, and axis-angle for both in OOD. We do so to avoid biasing results with a representation that is better suited for one model variant or the other. As depicted in Fig. 6b, the error of the char-based model is 2.64 times higher than that of the float-based model when evaluated in-distribution. Upon testing in the OOD condition, the disparity is nearly consistent at 2.52 times that observed in the ID scenario, with the ID–OOD gap of the char-based model being 2.21 times that observed in the float-based variant. We attribute the superiority of the float-based model across both conditions to the increased data dimensionality. Additionally, the lesser performance decline observed when evaluating on the OOD training gaps further underscores the parameter-space efficiency of the float-based model.
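Among the rotation representations compared, the 6D representation of Zhou et al. (2019) maps six predicted numbers to a valid rotation matrix via Gram-Schmidt orthogonalization; a NumPy sketch of that mapping:

```python
import numpy as np

def rotation_from_6d(v):
    """Map a 6D vector to a rotation matrix via Gram-Schmidt, following
    Zhou et al. (2019): the first two 3-vectors are orthonormalized and
    the third basis vector is their cross product."""
    a1, a2 = np.asarray(v[:3], float), np.asarray(v[3:6], float)
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1           # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)                   # completes a right-handed frame
    return np.stack([b1, b2, b3], axis=-1)  # columns form the rotation matrix
```

The mapping is continuous over its domain, which is one plausible reason the 6D variant performs best in-distribution for the float-based model, whereas Euler-angle and axis-angle parameterizations contain discontinuities.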
4.3 6-DoF Pose Estimation

We examine the ability of our framework to scale in tackling a more challenging inverse-graphics task: 6-DoF pose estimation. Our exploration begins with an evaluation on single-object images, encompassing both quantitative and qualitative assessments, where we illustrate the framework's ability to generalize across visual domain shifts. We subsequently extend the setting to more-complex (albeit synthetic) multi-object scenes, demonstrating promising results for scene estimation while handling larger collections (>100) of diverse assets.

4.3.1 Single-Object 6-DoF

We first evaluate our framework's ability to scale to single-object 6-DoF pose estimation. The float- and char-based models are assessed quantitatively using rendered images.

Setting. We extend the setting used in Sec. 4.2.2 but unfreeze the previously fixed 3D position and assign it a randomly sampled value. We expand the number of colors used in the dataset to 133¹ to better emulate the diversity observed in the real world. Differing from the previous setup, we fix the size of the objects due to the relative depth–scale ambiguity of the toy airplanes. To evaluate our framework's ability to scale beyond data-constrained scenarios, we render a training dataset of one million images. Following the rotation-representation results of Sec. 4.2.2, we use the intrinsic-Euler representation for the char-based model and the 6D representation for the float-based model, as their use led to the greatest ID performance.

¹ https://simple.wikipedia.org/wiki/List_of_Crayola_crayon_colors

Results. Tab. 2 illustrates that, under this non-data-constrained scenario, both model variants effectively capture the dynamics of the task. The models both notably exhibit an order of magnitude lower positional error than in the CLEVR setting, despite the addition of 3D orientation and an additional positional dimension, and achieve rotational error 28% of that observed in the ID portion of the SO(3) range-gap evaluation.

Table 2: Single-Object 6-DoF Results. (Sec. 4.3.1) When evaluating on ID data in the one-million-sample single-object 6-DoF eval, we observe little difference between models; both well-capture the distribution. Geod. represents geodesic distance in degrees.

        ↓L2    ↓Geod.  ↑Color  ↑Mat.  ↑Shape
Char    0.02   5.03    79.10   99.00  99.80
Float   0.04   6.18    81.90   99.00  100.00

This reinforces the earlier observation from the CLEVR data-efficiency evaluation that, given sufficient data, the model variants exhibit a similar performance ceiling. Still, neither achieves the level of precision necessary to be directly constrained by the three-decimal-place discretization applied to numeric quantities throughout the evaluations, nor by the 16-bit training precision in the case of the float-based model. See Appendix A for further training details.

As part of our evaluation, we also qualitatively examine the ability of the model to transfer from the renders of toy planes with a solid-white background, on which it was fine-tuned, to estimating the pose and attributes of planes in real-world images. We provide qualitative samples of our model's generalization to such images in Fig. 7. We observe encouraging generalization across the majority of images tested, despite the lack of augmentation or domain-specific inductive bias applied during the training process. However, it is difficult to quantitatively evaluate such model performance due to the lack of paired real-world data in line with our compositional task. As a proxy for such an evaluation, we introduce a synthetic setting in Sec. 4.3.2 to quantitatively evaluate the ability of our framework to generalize across visual domains.

Figure 7: OOD Single-Object 6-DoF Samples. (Sec. 4.3.1) Sample 6-DoF reconstructions of real-world images. The model is fine-tuned with only Blender renderings of toy airplanes that have a white backdrop. See Fig. S.10 for additional samples.

4.3.2 Scene-Level 6-DoF

In this section, we explore the scalability of our framework to scene-level 6-DoF pose estimation, featuring 3-5 objects per scene and a much-expanded array of assets. This experiment not only assesses performance under more-challenging conditions, but also enables a quantitative evaluation of the framework's ability to generalize to scenes with OOD visual appearance.
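For intuition about what the model emits at scene level, the snippet below shows a hypothetical example of the kind of object-centric scene code the framework could be trained to produce; asset identifiers such as chairs_0055 follow the naming referenced in Sec. 5, but the attribute names and exact format here are illustrative assumptions rather than the paper's precise code representation.

```python
# Hypothetical object-centric scene code for a three-object ShapeNet-style scene.
# Positions are metric 3D translations; rotations use the 6D representation
# (first two columns of the rotation matrix, flattened).
scene = [
    {"asset": "chairs_0055", "color": "cerulean",
     "pos": [0.412, -1.038, 0.000],
     "rot6d": [0.981, 0.194, 0.000, -0.194, 0.981, 0.000]},
    {"asset": "tables_0012", "color": "mahogany",
     "pos": [-0.735, 0.221, 0.000],
     "rot6d": [1.000, 0.000, 0.000, 0.000, 1.000, 0.000]},
    {"asset": "sofas_0031", "color": "timberwolf",
     "pos": [1.290, 1.577, 0.000],
     "rot6d": [0.707, -0.707, 0.000, 0.707, 0.707, 0.000]},
]
```

Under the char-based variant, the numbers in such a description are decoded digit-by-digit as text, whereas the float-based variant emits them through the numeric head trained with MSE supervision; a graphics engine then re-renders the description, enabling the metric comparisons reported below.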
Setting. We construct an expanded CLEVR-like image–scene dataset, incorporating objects sourced from ShapeNet (Chang et al., 2015). The dataset comprises 56 chair types, 35 sofa types, and 47 table types. We remove the size and material attributes used in CLEVR, but employ the expanded color set used in Sec. 4.3.1 to randomly color the objects. After doing so, the total number of possible combinations of attributes is 191-fold that used in the CLEVR-CoGenT experiment. Differing from previous evaluations, we also vary the pitch of the camera and the radius of its arc, but maintain a fixed camera focal point. Returning from the million-image single-object 6-DoF evaluation to a relatively data-constrained setting, we render 10k training images and evaluate the framework on three conditions, each with 1k images: (1) ID, which matches the training distribution of scenes with solid-colored objects; (2) OOD texture (OOD-T), where the same object assets are used as in ID but the objects are rendered with original ShapeNet textures instead of the randomly assigned solid colors; and (3) OOD encompassing both unseen objects and original ShapeNet textures (OOD-T+S). We use this to emulate the distribution shift of modeling real-world scenes while facilitating quantitative evaluation.

Table 3: ShapeNet 6-DoF Results. (Sec. 4.3.2) The float-based model outperforms the char-based variant across all evaluations. Chamf. represents the Chamfer distance between the ground-truth and estimated scenes. Cat. represents category accuracy (sofa, chair, table).

             ID              OOD-T           OOD-T+S
         Char    Float    Char    Float    Char    Float
↓L2      0.22    0.18     0.31    0.26     0.52    0.40
↓Geod.   8.14    5.65     14.40   10.48    45.11   43.46
↓Count   0.01    0.01     0.08    0.08     0.09    0.08
↑Color   77.07   83.42    N/A     N/A      N/A     N/A
↑Shape   89.21   93.31    68.72   78.26    N/A     N/A
↑Cat.    97.27   98.33    94.55   96.58    85.99   86.71
↓Chamf.  0.45    0.22     1.17    0.57     14.63   2.58

Results. We observe that both approaches scale to the task, though the float-based model outperforms – or ties with – the char-based variant across evaluations (Tab. 3). This disparity is emphasized in the OOD-T+S setting, where the scene-level Chamfer distance of the char-based model jumps from approximately twice that of the float-based variant in the ID and OOD-T evaluations to 5.67 times as much. There is a decrease in performance when stepping to the OOD-T setting, observed most strongly in the count error (×8 for both) and the shape-recognition accuracy (−20.49% in char and −15.05% in float). We empirically attribute this to the model occasionally explaining some multi-color textured objects using a composition of multiple solid-color assets. Quantitatively supporting this, the performance decrease is not as strongly reflected in the scene-level Chamfer distance (×2.6 for both). See Fig. 8 for sample reconstructions from OOD-T+S and Fig. S.8 for samples from the ID setting.

Figure 8: OOD ShapeNet 6-DoF Samples. (Sec. 4.3.2) Two sample reconstructions from the OOD ShapeNet 6-DoF pose-estimation experiment. Left to right: input, output. We evaluate on assets not shown during training, with out-of-distribution textures. See Fig. S.9 for additional samples.

We additionally test our model on real-world samples, but find that it fails to consistently generalize (Fig. S.1). We attribute this failure partially to limitations in the camera-position training distribution.
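Since scene-level Chamfer distance drives several of the comparisons above, here is a brief sketch of a symmetric Chamfer distance between two scenes represented as point sets. Treating each scene as a set of sampled (or object-center) 3D points is an assumption for illustration; the paper does not spell out its exact computation here.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric (squared-distance) Chamfer distance between point sets
    of shape (n, 3) and (m, 3)."""
    # Pairwise squared distances between every point in a and every point in b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    # Average nearest-neighbor distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

Because the metric is computed between re-rendered estimated scenes and ground truth, it penalizes both pose errors and the multi-asset over-segmentation failure mode discussed above, while remaining agnostic to how objects are matched between scenes.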
5 Discussion and Limitations

Through our investigation, we demonstrated the ability of LLMs to facilitate inverse-graphics tasks across a variety of domain shifts, albeit within controlled settings. In designing targeted evaluations to analyze the model's generalization ability, our goal was to lay the groundwork necessary for future advancements. However, scaling up these models to metrically reconstruct complex real-world scenes will undoubtedly pose additional challenges.

The primary limitation of our approach is that its expressiveness is bounded by that of the training-data-generation framework. We demonstrated its ability to learn to compositionally disentangle images of scenes into constituent elements, reconstructing scenes under distribution shifts. However, because it reproduces scenes as text, the model can reconstruct scenes containing unknown objects in OOD configurations only in terms of the objects – and language – it is trained with. If it does not know the name of asset chairs_0055, it will not be able to use it. Even if the model produces the name of a new color or shape from outside the training data, the graphics engine rendering the LLM output must have an understanding of it in order to apply it. In contrast, the generality of our approach, which does not incorporate special task-specific inductive biases, allows it to scale with the diversity of the training data or the expressivity of the code format. Future work may explore more-scalable training-data generators or integrate self-supervision techniques to enable learning from unlabeled images. While we employ a relatively straightforward object-centric code representation across experiments for simplicity, more-expressive scene representations should also be explored. Our evaluation scenes feature only minor object occlusions and are relatively simple. While a generic next-token objective paired with MSE float supervision sufficed for these scenarios, addressing harder-to-disentangle scenes may require a trade-off between generality and inductive bias to incorporate additional supervision.

6 Conclusion

In this work, we investigated the ability of LLMs to solve inverse-graphics challenges. Introducing the Inverse-Graphics Large-Language-Model (IG-LLM) framework, we demonstrated that the broad generalization and reasoning capabilities of LLMs can be harnessed to facilitate inverse-graphics tasks. Through extensive evaluation, we assessed the model's capacity to generalize out-of-domain, revealing its ability to abstract scene elements compositionally. We additionally explored the integration of a numeric head to adapt LLMs for continuous metric-value estimation, providing enhanced generalization and smoother training dynamics. Our quantitative analyses demonstrate its ability to generalize compositionally (Sec. 4.1), in parameter space (Sec. 4.2), and across visual domains (Sec. 4.3). Our investigation demonstrates the ability of IG-LLM to leverage the general knowledge of LLMs in solving inverse-graphics problems, opening a new avenue for research.

Acknowledgements

We thank Silvia Zuffi for useful discussions and Benjamin Pellkofer for IT support.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society." + }, + { + "url": "http://arxiv.org/abs/2008.09092v1", + "title": "Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation", + "abstract": "Procedural models are being widely used to synthesize scenes for graphics,\ngaming, and to create (labeled) synthetic datasets for ML.
In order to produce\nrealistic and diverse scenes, a number of parameters governing the procedural\nmodels have to be carefully tuned by experts. These parameters control both the\nstructure of scenes being generated (e.g. how many cars in the scene), as well\nas parameters which place objects in valid configurations. Meta-Sim aimed at\nautomatically tuning parameters given a target collection of real images in an\nunsupervised way. In Meta-Sim2, we aim to learn the scene structure in addition\nto parameters, which is a challenging problem due to its discrete nature.\nMeta-Sim2 proceeds by learning to sequentially sample rule expansions from a\ngiven probabilistic scene grammar. Due to the discrete nature of the problem,\nwe use Reinforcement Learning to train our model, and design a feature space\ndivergence between our synthesized and target images that is key to successful\ntraining. Experiments on a real driving dataset show that, without any\nsupervision, we can successfully learn to generate data that captures discrete\nstructural statistics of objects, such as their frequency, in real images. We\nalso show that this leads to downstream improvement in the performance of an\nobject detector trained on our generated dataset as opposed to other baseline\nsimulation methods. Project page:\nhttps://nv-tlabs.github.io/meta-sim-structure/.", + "authors": "Jeevan Devaranjan, Amlan Kar, Sanja Fidler", + "published": "2020-08-20", + "updated": "2020-08-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG", + "eess.IV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.00312v1", + "title": "3DP3: 3D Scene Perception via Probabilistic Programming", + "abstract": "We present 3DP3, a framework for inverse graphics that uses inference in a\nstructured generative model of objects, scenes, and images. 3DP3 uses (i) voxel\nmodels to represent the 3D shape of objects, (ii) hierarchical scene graphs to\ndecompose scenes into objects and the contacts between them, and (iii) depth\nimage likelihoods based on real-time graphics. Given an observed RGB-D image,\n3DP3's inference algorithm infers the underlying latent 3D scene, including the\nobject poses and a parsimonious joint parametrization of these poses, using\nfast bottom-up pose proposals, novel involutive MCMC updates of the scene graph\nstructure, and, optionally, neural object detectors and pose estimators. We\nshow that 3DP3 enables scene understanding that is aware of 3D shape,\nocclusion, and contact structure. Our results demonstrate that 3DP3 is more\naccurate at 6DoF object pose estimation from real images than deep learning\nbaselines and shows better generalization to challenging scenes with novel\nviewpoints, contact, and partial observability.", + "authors": "Nishad Gothoskar, Marco Cusumano-Towner, Ben Zinberg, Matin Ghavamizadeh, Falk Pollok, Austin Garrett, Joshua B. Tenenbaum, Dan Gutfreund, Vikash K. Mansinghka", + "published": "2021-10-30", + "updated": "2021-10-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1411.6067v2", + "title": "Viewpoints and Keypoints", + "abstract": "We characterize the problem of pose estimation for rigid objects in terms of\ndetermining viewpoint to explain coarse pose and keypoint prediction to capture\nthe finer details. 
We address both these tasks in two different settings - the\nconstrained setting with known bounding boxes and the more challenging\ndetection setting where the aim is to simultaneously detect and correctly\nestimate pose of objects. We present Convolutional Neural Network based\narchitectures for these and demonstrate that leveraging viewpoint estimates can\nsubstantially improve local appearance based keypoint predictions. In addition\nto achieving significant improvements over state-of-the-art in the above tasks,\nwe analyze the error modes and effect of object characteristics on performance\nto guide future efforts towards this goal.", + "authors": "Shubham Tulsiani, Jitendra Malik", + "published": "2014-11-22", + "updated": "2015-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1612.00404v4", + "title": "Learning Shape Abstractions by Assembling Volumetric Primitives", + "abstract": "We present a learning framework for abstracting complex shapes by learning to\nassemble objects using 3D volumetric primitives. In addition to generating\nsimple and geometrically interpretable explanations of 3D objects, our\nframework also allows us to automatically discover and exploit consistent\nstructure in the data. We demonstrate that using our method allows predicting\nshape representations which can be leveraged for obtaining a consistent parsing\nacross the instances of a shape collection and constructing an interpretable\nshape similarity measure. We also examine applications for image-based\nprediction as well as shape manipulation.", + "authors": "Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, Jitendra Malik", + "published": "2016-12-01", + "updated": "2018-08-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.13034v1", + "title": "Mask2CAD: 3D Shape Prediction by Learning to Segment and Retrieve", + "abstract": "Object recognition has seen significant progress in the image domain, with\nfocus primarily on 2D perception. We propose to leverage existing large-scale\ndatasets of 3D models to understand the underlying 3D structure of objects seen\nin an image by constructing a CAD-based representation of the objects and their\nposes. We present Mask2CAD, which jointly detects objects in real-world images\nand for each detected object, optimizes for the most similar CAD model and its\npose. We construct a joint embedding space between the detected regions of an\nimage corresponding to an object and 3D CAD models, enabling retrieval of CAD\nmodels for an input RGB image. This produces a clean, lightweight\nrepresentation of the objects in an image; this CAD-based representation\nensures a valid, efficient shape representation for applications such as\ncontent creation or interactive scenarios, and makes a step towards\nunderstanding the transformation of real-world imagery to a synthetic domain.\nExperiments on real-world images from Pix3D demonstrate the advantage of our\napproach in comparison to state of the art. 
To facilitate future research, we\nadditionally propose a new image-to-3D baseline on ScanNet which features\nlarger shape diversity, real-world occlusions, and challenging image views.", + "authors": "Weicheng Kuo, Anelia Angelova, Tsung-Yi Lin, Angela Dai", + "published": "2020-07-26", + "updated": "2020-07-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "eess.IV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.15632v1", + "title": "ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing", + "abstract": "Sketch-and-extrude is a common and intuitive modeling process in computer\naided design. This paper studies the problem of learning the shape given in the\nform of point clouds by inverse sketch-and-extrude. We present ExtrudeNet, an\nunsupervised end-to-end network for discovering sketch and extrude from point\nclouds. Behind ExtrudeNet are two new technical components: 1) an effective\nrepresentation for sketch and extrude, which can model extrusion with freeform\nsketches and conventional cylinder and box primitives as well; and 2) a\nnumerical method for computing the signed distance field which is used in the\nnetwork learning. This is the first attempt that uses machine learning to\nreverse engineer the sketch-and-extrude modeling process of a shape in an\nunsupervised fashion. ExtrudeNet not only outputs a compact, editable and\ninterpretable representation of the shape that can be seamlessly integrated\ninto modern CAD software, but also aligns with the standard CAD modeling\nprocess facilitating various editing applications, which distinguishes our work\nfrom existing shape parsing research. Code is released at\nhttps://github.com/kimren227/ExtrudeNet.", + "authors": "Daxuan Ren, Jianmin Zheng, Jianfei Cai, Jiatong Li, Junzhe Zhang", + "published": "2022-09-30", + "updated": "2022-09-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1902.06729v2", + "title": "3D Scene Reconstruction with Multi-layer Depth and Epipolar Transformers", + "abstract": "We tackle the problem of automatically reconstructing a complete 3D model of\na scene from a single RGB image. This challenging task requires inferring the\nshape of both visible and occluded surfaces. Our approach utilizes\nviewer-centered, multi-layer representation of scene geometry adapted from\nrecent methods for single object shape completion. To improve the accuracy of\nview-centered representations for complex scenes, we introduce a novel\n\"Epipolar Feature Transformer\" that transfers convolutional network features\nfrom an input view to other virtual camera viewpoints, and thus better covers\nthe 3D scene geometry. Unlike existing approaches that first detect and\nlocalize objects in 3D, and then infer object shape using category-specific\nmodels, our approach is fully convolutional, end-to-end differentiable, and\navoids the resolution and memory limitations of voxel representations. We\ndemonstrate the advantages of multi-layer depth representations and epipolar\nfeature transformers on the reconstruction of a large database of indoor\nscenes.", + "authors": "Daeyun Shin, Zhile Ren, Erik B. Sudderth, Charless C. 
Fowlkes", + "published": "2019-02-18", + "updated": "2019-08-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1906.01618v2", + "title": "Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations", + "abstract": "Unsupervised learning with generative models has the potential of discovering\nrich representations of 3D scenes. While geometric deep learning has explored\n3D-structure-aware representations of scene geometry, these models typically\nrequire explicit 3D supervision. Emerging neural scene representations can be\ntrained only with posed 2D images, but existing methods ignore the\nthree-dimensional structure of scenes. We propose Scene Representation Networks\n(SRNs), a continuous, 3D-structure-aware scene representation that encodes both\ngeometry and appearance. SRNs represent scenes as continuous functions that map\nworld coordinates to a feature representation of local scene properties. By\nformulating the image formation as a differentiable ray-marching algorithm,\nSRNs can be trained end-to-end from only 2D images and their camera poses,\nwithout access to depth or shape. This formulation naturally generalizes across\nscenes, learning powerful geometry and appearance priors in the process. We\ndemonstrate the potential of SRNs by evaluating them for novel view synthesis,\nfew-shot reconstruction, joint shape and appearance interpolation, and\nunsupervised discovery of a non-rigid face model.", + "authors": "Vincent Sitzmann, Michael Zollh\u00f6fer, Gordon Wetzstein", + "published": "2019-06-04", + "updated": "2020-01-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "I.2.10; I.4.5; I.4.8; I.4.10" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.05624v1", + "title": "Robust Category-Level 6D Pose Estimation with Coarse-to-Fine Rendering of Neural Features", + "abstract": "We consider the problem of category-level 6D pose estimation from a single\nRGB image. Our approach represents an object category as a cuboid mesh and\nlearns a generative model of the neural feature activations at each mesh vertex\nto perform pose estimation through differentiable rendering. A common problem\nof rendering-based approaches is that they rely on bounding box proposals,\nwhich do not convey information about the 3D rotation of the object and are not\nreliable when objects are partially occluded. Instead, we introduce a\ncoarse-to-fine optimization strategy that utilizes the rendering process to\nestimate a sparse set of 6D object proposals, which are subsequently refined\nwith gradient-based optimization. The key to enabling the convergence of our\napproach is a neural feature representation that is trained to be scale- and\nrotation-invariant using contrastive learning. Our experiments demonstrate an\nenhanced category-level 6D pose estimation performance compared to prior work,\nparticularly under strong partial occlusion.", + "authors": "Wufei Ma, Angtian Wang, Alan Yuille, Adam Kortylewski", + "published": "2022-09-12", + "updated": "2022-09-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.04334v1", + "title": "Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation", + "abstract": "We present Panoptic Neural Fields (PNF), an object-aware neural scene\nrepresentation that decomposes a scene into a set of objects (things) and\nbackground (stuff). 
Each object is represented by an oriented 3D bounding box\nand a multi-layer perceptron (MLP) that takes position, direction, and time and\noutputs density and radiance. The background stuff is represented by a similar\nMLP that additionally outputs semantic labels. Each object MLPs are\ninstance-specific and thus can be smaller and faster than previous object-aware\napproaches, while still leveraging category-specific priors incorporated via\nmeta-learned initialization. Our model builds a panoptic radiance field\nrepresentation of any scene from just color images. We use off-the-shelf\nalgorithms to predict camera poses, object tracks, and 2D image semantic\nsegmentations. Then we jointly optimize the MLP weights and bounding box\nparameters using analysis-by-synthesis with self-supervision from color images\nand pseudo-supervision from predicted semantic segmentations. During\nexperiments with real-world dynamic scenes, we find that our model can be used\neffectively for several tasks like novel view synthesis, 2D panoptic\nsegmentation, 3D scene editing, and multiview depth prediction.", + "authors": "Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser", + "published": "2022-05-09", + "updated": "2022-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.05473v2", + "title": "Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives", + "abstract": "Given a set of calibrated images of a scene, we present an approach that\nproduces a simple, compact, and actionable 3D world representation by means of\n3D primitives. While many approaches focus on recovering high-fidelity 3D\nscenes, we focus on parsing a scene into mid-level 3D representations made of a\nsmall set of textured primitives. Such representations are interpretable, easy\nto manipulate and suited for physics-based simulations. Moreover, unlike\nexisting primitive decomposition methods that rely on 3D input data, our\napproach operates directly on images through differentiable rendering.\nSpecifically, we model primitives as textured superquadric meshes and optimize\ntheir parameters from scratch with an image rendering loss. We highlight the\nimportance of modeling transparency for each primitive, which is critical for\noptimization and also enables handling varying numbers of primitives. We show\nthat the resulting textured primitives faithfully reconstruct the input images\nand accurately model the visible 3D points, while providing amodal shape\ncompletions of unseen object regions. We compare our approach to the state of\nthe art on diverse scenes from DTU, and demonstrate its robustness on real-life\ncaptures from BlendedMVS and Nerfstudio. We also showcase how our results can\nbe used to effortlessly edit a scene or perform physical simulations. Code and\nvideo results are available at https://www.tmonnier.com/DBW .", + "authors": "Tom Monnier, Jake Austin, Angjoo Kanazawa, Alexei A. 
Efros, Mathieu Aubry", + "published": "2023-07-11", + "updated": "2023-12-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2002.12212v1", + "title": "Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes from a Single Image", + "abstract": "Semantic reconstruction of indoor scenes refers to both scene understanding\nand object reconstruction. Existing works either address one part of this\nproblem or focus on independent objects. In this paper, we bridge the gap\nbetween understanding and reconstruction, and propose an end-to-end solution to\njointly reconstruct room layout, object bounding boxes and meshes from a single\nimage. Instead of separately resolving scene understanding and object\nreconstruction, our method builds upon a holistic scene context and proposes a\ncoarse-to-fine hierarchy with three components: 1. room layout with camera\npose; 2. 3D object bounding boxes; 3. object meshes. We argue that\nunderstanding the context of each component can assist the task of parsing the\nothers, which enables joint understanding and reconstruction. The experiments\non the SUN RGB-D and Pix3D datasets demonstrate that our method consistently\noutperforms existing methods in indoor layout estimation, 3D object detection\nand mesh reconstruction.", + "authors": "Yinyu Nie, Xiaoguang Han, Shihui Guo, Yujian Zheng, Jian Chang, Jian Jun Zhang", + "published": "2020-02-27", + "updated": "2020-02-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.02485v2", + "title": "Building Cooperative Embodied Agents Modularly with Large Language Models", + "abstract": "In this work, we address challenging multi-agent cooperation problems with\ndecentralized control, raw sensory observations, costly communication, and\nmulti-objective tasks instantiated in various embodied environments. While\nprevious research either presupposes a cost-free communication channel or\nrelies on a centralized controller with shared observations, we harness the\ncommonsense knowledge, reasoning ability, language comprehension, and text\ngeneration prowess of LLMs and seamlessly incorporate them into a\ncognitive-inspired modular framework that integrates with perception, memory,\nand execution. Thus building a Cooperative Embodied Language Agent CoELA, who\ncan plan, communicate, and cooperate with others to accomplish long-horizon\ntasks efficiently. Our experiments on C-WAH and TDW-MAT demonstrate that CoELA\ndriven by GPT-4 can surpass strong planning-based methods and exhibit emergent\neffective communication. Though current Open LMs like LLAMA-2 still\nunderperform, we fine-tune a CoELA with data collected with our agents and show\nhow they can achieve promising performance. We also conducted a user study for\nhuman-agent interaction and discovered that CoELA communicating in natural\nlanguage can earn more trust and cooperate more effectively with humans. Our\nresearch underscores the potential of LLMs for future research in multi-agent\ncooperation. Videos can be found on the project website\nhttps://vis-www.cs.umass.edu/Co-LLM-Agents/.", + "authors": "Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan", + "published": "2023-07-05", + "updated": "2024-02-17", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.12945v1", + "title": "3D-GPT: Procedural 3D Modeling with Large Language Models", + "abstract": "In the pursuit of efficient automated content creation, procedural\ngeneration, leveraging modifiable parameters and rule-based systems, emerges as\na promising approach. Nonetheless, it could be a demanding endeavor, given its\nintricate nature necessitating a deep understanding of rules, algorithms, and\nparameters. To reduce workload, we introduce 3D-GPT, a framework utilizing\nlarge language models~(LLMs) for instruction-driven 3D modeling. 3D-GPT\npositions LLMs as proficient problem solvers, dissecting the procedural 3D\nmodeling tasks into accessible segments and appointing the apt agent for each\ntask. 3D-GPT integrates three core agents: the task dispatch agent, the\nconceptualization agent, and the modeling agent. They collaboratively achieve\ntwo objectives. First, it enhances concise initial scene descriptions, evolving\nthem into detailed forms while dynamically adapting the text based on\nsubsequent instructions. Second, it integrates procedural generation,\nextracting parameter values from enriched text to effortlessly interface with\n3D software for asset creation. Our empirical investigations confirm that\n3D-GPT not only interprets and executes instructions, delivering reliable\nresults but also collaborates effectively with human designers.
Furthermore, it\nseamlessly integrates with Blender, unlocking expanded manipulation\npossibilities. Our work highlights the potential of LLMs in 3D modeling,\noffering a basic framework for future advancements in scene generation and\nanimation.", + "authors": "Chunyi Sun, Junlin Han, Weijian Deng, Xinlong Wang, Zishan Qin, Stephen Gould", + "published": "2023-10-19", + "updated": "2023-10-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.03828v2", + "title": "Occupancy Networks: Learning 3D Reconstruction in Function Space", + "abstract": "With the advent of deep neural networks, learning-based approaches for 3D\nreconstruction have gained popularity. However, unlike for images, in 3D there\nis no canonical representation which is both computationally and memory\nefficient yet allows for representing high-resolution geometry of arbitrary\ntopology. Many of the state-of-the-art learning-based 3D reconstruction\napproaches can hence only represent very coarse 3D geometry or are limited to a\nrestricted domain. In this paper, we propose Occupancy Networks, a new\nrepresentation for learning-based 3D reconstruction methods. Occupancy networks\nimplicitly represent the 3D surface as the continuous decision boundary of a\ndeep neural network classifier. In contrast to existing approaches, our\nrepresentation encodes a description of the 3D output at infinite resolution\nwithout excessive memory footprint. We validate that our representation can\nefficiently encode 3D structure and can be inferred from various kinds of\ninput. Our experiments demonstrate competitive results, both qualitatively and\nquantitatively, for the challenging tasks of 3D reconstruction from single\nimages, noisy point clouds and coarse discrete voxel grids. We believe that\noccupancy networks will become a useful tool in a wide variety of\nlearning-based 3D tasks.", + "authors": "Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, Andreas Geiger", + "published": "2018-12-10", + "updated": "2019-04-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.05171v4", + "title": "ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding", + "abstract": "The recognition capabilities of current state-of-the-art 3D models are\nlimited by datasets with a small number of annotated data and a pre-defined set\nof categories. In its 2D counterpart, recent advances have shown that similar\nproblems can be significantly alleviated by employing knowledge from other\nmodalities, such as language. Inspired by this, leveraging multimodal\ninformation for 3D modality could be promising to improve 3D understanding\nunder the restricted data regime, but this line of research is not well\nstudied. Therefore, we introduce ULIP to learn a unified representation of\nimages, texts, and 3D point clouds by pre-training with object triplets from\nthe three modalities. To overcome the shortage of training triplets, ULIP\nleverages a pre-trained vision-language model that has already learned a common\nvisual and textual space by training with massive image-text pairs. Then, ULIP\nlearns a 3D representation space aligned with the common image-text space,\nusing a small number of automatically synthesized triplets. 
ULIP is agnostic to\n3D backbone networks and can easily be integrated into any 3D architecture.\nExperiments show that ULIP effectively improves the performance of multiple\nrecent 3D backbones by simply pre-training them on ShapeNet55 using our\nframework, achieving state-of-the-art performance in both standard 3D\nclassification and zero-shot 3D classification on ModelNet40 and ScanObjectNN.\nULIP also improves the performance of PointMLP by around 3% in 3D\nclassification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1\naccuracy for zero-shot 3D classification on ModelNet40. Our code and\npre-trained models are released at https://github.com/salesforce/ULIP.", + "authors": "Le Xue, Mingfei Gao, Chen Xing, Roberto Mart\u00edn-Mart\u00edn, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, Silvio Savarese", + "published": "2022-12-10", + "updated": "2023-06-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1808.09351v4", + "title": "3D-Aware Scene Manipulation via Inverse Graphics", + "abstract": "We aim to obtain an interpretable, expressive, and disentangled scene\nrepresentation that contains comprehensive structural and textural information\nfor each object. Previous scene representations learned by neural networks are\noften uninterpretable, limited to a single object, or lacking 3D knowledge. In\nthis work, we propose 3D scene de-rendering networks (3D-SDN) to address the\nabove issues by integrating disentangled representations for semantics,\ngeometry, and appearance into a deep generative model. Our scene encoder\nperforms inverse graphics, translating a scene into a structured object-wise\nrepresentation. Our decoder has two components: a differentiable shape renderer\nand a neural texture generator. The disentanglement of semantics, geometry, and\nappearance supports 3D-aware scene manipulation, e.g., rotating and moving\nobjects freely while keeping the consistent shape and texture, and changing the\nobject appearance without affecting its shape. Experiments demonstrate that our\nediting scheme based on 3D-SDN is superior to its 2D counterpart.", + "authors": "Shunyu Yao, Tzu Ming Harry Hsu, Jun-Yan Zhu, Jiajun Wu, Antonio Torralba, William T. Freeman, Joshua B. Tenenbaum", + "published": "2018-08-28", + "updated": "2018-12-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "eess.IV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1802.05384v3", + "title": "AtlasNet: A Papier-M\u00e2ch\u00e9 Approach to Learning 3D Surface Generation", + "abstract": "We introduce a method for learning to generate the surface of 3D shapes. Our\napproach represents a 3D shape as a collection of parametric surface elements\nand, in contrast to methods generating voxel grids or point clouds, naturally\ninfers a surface representation of the shape. Beyond its novelty, our new shape\ngeneration framework, AtlasNet, comes with significant advantages, such as\nimproved precision and generalization capabilities, and the possibility to\ngenerate a shape of arbitrary resolution without memory issues. We demonstrate\nthese benefits and compare to strong baselines on the ShapeNet benchmark for\ntwo applications: (i) auto-encoding shapes, and (ii) single-view reconstruction\nfrom a still image. 
We also provide results showing its potential for other\napplications, such as morphing, parametrization, super-resolution, matching,\nand co-segmentation.", + "authors": "Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry", + "published": "2018-02-15", + "updated": "2018-07-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.06395v2", + "title": "An Inverse Procedural Modeling Pipeline for SVBRDF Maps", + "abstract": "Procedural modeling is now the de facto standard of material modeling in\nindustry. Procedural models can be edited and are easily extended, unlike\npixel-based representations of captured materials. In this paper, we present a\nsemi-automatic pipeline for general material proceduralization. Given\nSpatially-Varying Bidirectional Reflectance Distribution Functions (SVBRDFs)\nrepresented as sets of pixel maps, our pipeline decomposes them into a tree of\nsub-materials whose spatial distributions are encoded by their associated mask\nmaps. This semi-automatic decomposition of material maps progresses\nhierarchically, driven by our new spectrum-aware material matting and\ninstance-based decomposition methods. Each decomposed sub-material is\nproceduralized by a novel multi-layer noise model to capture local variations\nat different scales. Spatial distributions of these sub-materials are modeled\neither by a by-example inverse synthesis method recovering Point Process\nTexture Basis Functions (PPTBF) or via random sampling. To reconstruct\nprocedural material maps, we propose a differentiable rendering-based\noptimization that recomposes all generated procedures together to maximize the\nsimilarity between our procedural models and the input material pixel maps. We\nevaluate our pipeline on a variety of synthetic and real materials. We\ndemonstrate our method's capacity to process a wide range of material types,\neliminating the need for artist designed material graphs required in previous\nwork. As fully procedural models, our results expand to arbitrary resolution\nand enable high level user control of appearance.", + "authors": "Yiwei Hu, Chengan He, Valentin Deschaintre, Julie Dorsey, Holly Rushmeier", + "published": "2021-09-14", + "updated": "2021-09-27", + "primary_cat": "cs.GR", + "cats": [ + "cs.GR", + "I.3" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. 
Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmark that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.15997v1", + "title": "RoCar: A Relationship Network-based Evaluation Method to Large Language Models", + "abstract": "Large language models (LLMs) have received increasing attention. However, due\nto the complexity of its capabilities, how to rationally evaluate the\ncapabilities of LLMs is still a task to be solved. We propose the RoCar method,\nwhich utilizes the defined basic schemas to randomly construct a task graph and\ngenerates natural language evaluation tasks based on the task graph to evaluate\nthe reasoning and memory abilities of LLMs respectively. Due to the very large\nrandomness of the task construction process, it is possible to ensure that none\nof the LLMs to be tested has directly learned the evaluation tasks,\nguaranteeing the fairness of the evaluation method.", + "authors": "Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang", + "published": "2023-07-29", + "updated": "2023-07-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. 
Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications. In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can inherit both societal biases against protected\ngroups, as well as be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance. 
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies, amidst proposing a novel method using LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture on the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigate cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nDespite that a number of high-quality benchmarks have been released, the\nconcerns about the appropriate use of these benchmarks and the fair comparison\nof different models are increasingly growing. 
Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on one particular issue that can lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, which refers to the situation where data related to\nevaluation sets is occasionally used for model training. This phenomenon has\nbecome more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nultimately leads to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to the appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. 
Furthermore, we explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. They thus rely primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs' ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human alignment.\nInterestingly, we found that context had no impact on the LLMs' performance.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. 
We compare two architectures:\nsingle- and multi-agent LLMs. We evaluate their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread adoption of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2,116 elaborately designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunsatisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research on harmless LLMs.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes; however, existing pruning methods\nfocus solely on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as these models are deployed and made available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. By contrast, the evaluation of the knowledge that these\nmodels have of languages, for example, the\nwords that they can recognize and use in different languages, has received much less attention. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs, ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. 
The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are especially susceptible to such inclinations even\nwhen directly steered towards conservatism. To avoid politicised responses from these\nmodels, we recommend that users be mindful when crafting\nqueries and take care to use neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. 
In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy that incorporates carefully crafted malicious demonstrations\nfor trustworthiness attacks. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering their performance in the $\\textit{Binary Statutory\nReasoning}$ task and the fairness they exhibit with respect to various axes of\ndisparity in Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. 
Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. 
The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown that these models are\nsensitive to prompt wording, as well as to few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs' robustness on the task of\nmultiple-choice questions -- a commonly adopted task for studying the reasoning and\nfact-retrieving capabilities of LLMs. Investigating the sensitivity of LLMs\nto the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction among the\ntop-2/3 choices, and that specific option placements may then favor certain predictions\namong those top choices, depending on the question, as a result of positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nimprovements of up to 8 percentage points across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers have started to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, the target designs are all relatively simple, small\nin scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works\nfocus only on design correctness, without evaluating the quality of the\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarize three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. 
This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 on our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherently multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenario alignment: fair debate, mismatched debate, and roundtable\ndebate. Extensive experiments on various datasets show that LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. 
Code and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore whether such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, any bias in\nCtB-LLMs, whether small or large, may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. 
Finally,\nthe debiasing of OPT using LoRA reduces bias by up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language Models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of the bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. 
These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs), the\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. 
This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring the reliability, fairness, and societal benefit of these models.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nin their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. 
The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensembles of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. 
However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. Given this, as well as the widespread use of tabular data\nin many high-stakes applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent LLM\nclassifications for tabular data are influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. We discuss practical\ndesign guidelines and the needs for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. 
As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. 
Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, and evaluation\nefficiency is commonly overlooked given the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs. Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluation methods that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. 
We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. 
This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. 
To mitigate the biases of LLMs from a group fairness perspective, we pioneer a novel chain-of-thought method,\nGF-Think. Experimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity and equal representation based on factors such as race\nand gender, and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions, including the use of sensitive attributes in user\nprompts, our framework identifies potential biases in the recommendations\nprovided.
A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. 
We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to be\ncomprehensively collected and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. 
Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Models\n(LLMs) to evaluate the quality of other LLMs. Many studies have employed\nproprietary closed-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse.
Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. 
This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to free-text queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from the perspectives of both computer\nscience and the Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs.
This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to datacentered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is a feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. 
Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. 
These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.", + "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu", + "published": "2024-02-18", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous - 80% of our personas demonstrate bias; it is significant - some\ndatasets show performance drops of 70%+; and can be especially harmful for\ncertain groups - some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard-to-discern and hard-to-avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs - a trend on\nthe rise - can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. 
This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to models weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. 
This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data-40 times less,\nutilizing merely 20 data points compared to the ML's 800. 
LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.11033v4", + "title": "FAIR Enough: How Can We Develop and Assess a FAIR-Compliant Dataset for Large Language Models' Training?", + "abstract": "The rapid evolution of Large Language Models (LLMs) highlights the necessity\nfor ethical considerations and data integrity in AI development, particularly\nemphasizing the role of FAIR (Findable, Accessible, Interoperable, Reusable)\ndata principles. While these principles are crucial for ethical data\nstewardship, their specific application in the context of LLM training data\nremains an under-explored area. This research gap is the focus of our study,\nwhich begins with an examination of existing literature to underline the\nimportance of FAIR principles in managing data for LLM training. Building upon\nthis, we propose a novel framework designed to integrate FAIR principles into\nthe LLM development lifecycle. A contribution of our work is the development of\na comprehensive checklist intended to guide researchers and developers in\napplying FAIR data principles consistently across the model development\nprocess. The utility and effectiveness of our framework are validated through a\ncase study on creating a FAIR-compliant dataset aimed at detecting and\nmitigating biases in LLMs. We present this framework to the community as a tool\nto foster the creation of technologically advanced, ethically grounded, and\nsocially responsible AI models.", + "authors": "Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya", + "published": "2024-01-19", + "updated": "2024-04-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. 
Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. 
Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. 
We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems,\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-boxed and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. 
This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. 
On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + } + ], + [ + { + "url": "http://arxiv.org/abs/2404.14527v1", + "title": "M\u00e9lange: Cost Efficient Large Language Model Serving by Exploiting GPU Heterogeneity", + "abstract": "Large language models (LLMs) are increasingly integrated into many online\nservices. However, a major challenge in deploying LLMs is their high cost, due\nprimarily to the use of expensive GPU instances. To address this problem, we\nfind that the significant heterogeneity of GPU types presents an opportunity to\nincrease GPU cost efficiency and reduce deployment costs. The broad and growing\nmarket of GPUs creates a diverse option space with varying costs and hardware\nspecifications. Within this space, we show that there is not a linear\nrelationship between GPU cost and performance, and identify three key LLM\nservice characteristics that significantly affect which GPU type is the most\ncost effective: model request size, request rate, and latency service-level\nobjective (SLO). We then present M\\'elange, a framework for navigating the\ndiversity of GPUs and LLM service specifications to derive the most\ncost-efficient set of GPUs for a given LLM service. We frame the task of GPU\nselection as a cost-aware bin-packing problem, where GPUs are bins with a\ncapacity and cost, and items are request slices defined by a request size and\nrate. Upon solution, M\\'elange derives the minimal-cost GPU allocation that\nadheres to a configurable latency SLO. Our evaluations across both real-world\nand synthetic datasets demonstrate that M\\'elange can reduce deployment costs\nby up to 77% as compared to utilizing only a single GPU type, highlighting the\nimportance of making heterogeneity-aware GPU provisioning decisions for LLM\nserving. Our source code is publicly available at\nhttps://github.com/tyler-griggs/melange-release.", + "authors": "Tyler Griggs, Xiaoxuan Liu, Jiaxiang Yu, Doyoung Kim, Wei-Lin Chiang, Alvin Cheung, Ion Stoica", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "2.1 LLM Inference Optimization A significant body of research has focused on optimizing LLM inference. One stream concentrates on memory optimization, particularly through improved key-value cache reuse [54] and management strategies [21]. Another avenue seeks to minimize latency, such as scheduling optimization [51, 1, 47], speculative decoding [22], and kernel optimization [8, 42]. Additional optimizations include quantization [10, 23, 50] and sparsification [9]. 
Instead of altering inference logic, our work assumes a fixed inference engine configuration and concentrates on reducing LLM deployment costs by choosing cost-effective GPU instance types. 2.2 Machine Learning with Cloud Resources Recent studies have explored various strategies for reducing the cost of machine learning (ML) inference or training. Several focus on utilizing spot instances [43, 15, 52, 13] are complementary to our work. Other work targets deployment on heterogeneous resources [5, 6, 31, 28, 26], but focuses primarily on model training rather than serving. Leveraging serverless instances for inference cost reduction has been examined in [2]. Nonetheless, prior work predominantly concentrates on machine learning prior to the advent of LLMs, which we show to have unique characteristics that significantly impact cost efficiency. More recent studies, such as [27, 18], focus on LLMs, but they propose strategies for reducing costs via optimal migration plans and parallelism with heterogeneous resources. They do not identify the key LLM service characteristics that impact cost efficiency, which our work highlights. Another line of work [56, 38] explores splitting LLM inference into its two phases (prefill and decode) and performing the two phases on separate nodes, perhaps with different GPU types. Our work shows that, even within a phase, the best GPU type can change based on LLM service specifications.", + "pre_questions": [], + "main_content": "Introduction Large language models (LLMs) like GPT-4 [37] and the Llama model family [44, 45] are increasingly integrated into many online services, including search engines [39, 25], chatbots [36], and virtual assistants [29, 48, 49]. However, a significant obstacle in deploying LLM services is their high operational costs. The substantial size and computational demands of LLMs require the use of hardware accelerators, typically GPUs1, to achieve high-performance inference. Unfortunately, GPUs are expensive. For example, renting just a single on-demand NVIDIA A100 on a major cloud provider costs over $2, 600 per month, and many services require multiple A100s to serve especially large models or request volumes. Prior work [51, 54, 21] has introduced methods for increasing inference throughput to squeeze ever more performance out of expensive GPUs. However, less attention has been given to choosing the best GPU type(s) to use for a given LLM service. The broad and growing market of hardware accelerators, including NVIDIA GPUs [35], AMD GPUs [46], Google TPUs [20], CPUs [24], and more [4], creates a diverse option space with a wide range of hardware specifications and rental prices. Table 1 depicts the specs of just four NVIDIA GPUs, which already exhibits a large variety of costs and performance. Within this option space, we find that there is not a linear relationship between GPU cost and performance, which creates variations in GPU \u2217Equal contribution. 1For brevity, we use \u201caccelerator\u201d and \u201cGPU\u201d interchangeably in this work. 1 arXiv:2404.14527v1 [cs.DC] 22 Apr 2024 cost efficiency, defined based on common pricing models [36] as the sum of input and output tokens served per GPU dollar cost (T/$). Instead, we show that a GPU\u2019s cost efficiency is strongly impacted by three key LLM service characteristics: 1. Request Size: An LLM request\u2019s size is made up of its input and output token lengths. For small request sizes, we find that lower-end GPUs produce greater T/$ than their high-end GPU counterparts. 
Deployment expenses can be reduced by employing cheaper GPUs for smaller requests while reserving costly, high-capacity GPUs for handling larger request sizes. 2. Request Rate: To reduce resource waste, provisioned GPU capacity should align with request volume. An expensive under-utilized GPU exhibits lower T/$ than a cheaper GPU that still meets service demand. Therefore, at low request rates, services can reduce costs by right-sizing from expensive high-end GPUs to cheap low-end GPUs. At higher request rates, leveraging a mix of GPU types facilitates finer-grained resource scaling to better match request volume. 3. Service-level Objective: Services typically establish latency SLOs to ensure high service quality, with the specific SLO varying according to the service\u2019s interactivity needs. In general, low-end GPUs incur higher latency than high-end GPUs. As a result, low-end GPUs may only meet tight SLOs at a low output token rate (or not at all), severely limiting achieved T/$. Thus, high-end GPUs are often required for stringent latency SLOs, whereas low-end GPUs can reduce costs in loose-SLO settings. Consequently, we find that the under-appreciated heterogeneity of GPUs presents opportunities for increasing GPU cost efficiency and significantly reducing LLM service costs. Consider combining the three observations above into a single service deployment: high-cost A100s may be necessary to serve large requests within SLO requirements, however, low-cost A10Gs can meet the latency deadline for small requests at higher T/$, reducing overall cost. Then, during periods of low service activity, the even cheaper L4 can maintain service availability at the lowest cost. The key challenge, then, is to navigate the diversity of request sizes, request rates, latency SLOs, and GPU instance types to find the optimal GPU selection for a given LLM service. In this paper, we present M\u00b4 elange2, a framework that maximizes GPU cost efficiency by automatically and efficiently navigating the heterogeneity of GPUs and LLM service specifications to derive the best GPU provisioning strategy. M\u00b4 elange\u2019s strength stems from its heterogeneity-awareness, that is, its knowledge of how diverse LLM service characteristics impact the cost efficiency of each GPU type. M\u00b4 elange takes as input the service workload profile, latency SLO, and set of GPU type options, and produces the GPU allocation that minimizes deployment costs while attaining SLO. We formulate the task of GPU selection as a cost-aware bin-packing problem where bins are GPUs with an associated capacity and cost, and items are request slices defined by a request size and rate, and solve the bin-packing problem with an off-the-shelf integer linear programming (ILP) solver. M\u00b4 elange can be easily extended to include new GPU types (or other hardware) and alternative definitions of SLO, flexibly supporting diverse LLM service deployments. We evaluate M\u00b4 elange across four GPU types (L4, A10G, A100, and H100), three datasets with varying request size distributions, and a range of request rates and SLOs. Compared to using only a single GPU type, M\u00b4 elange\u2019s heterogeneity-aware mixed-GPU-type approach achieves 9-77% cost reduction in short-context workloads (interactive chats), 2-33% in long-context workloads (document-based tasks), and 4-51% in mixedcontext workloads (both in a single service). 
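To make the cost-aware bin-packing framing above concrete, the following minimal Python sketch models GPU types as bins with an hourly cost and a normalized capacity, and request slices as items with a size and a rate. The class names, fields, and example values are illustrative assumptions rather than the paper's released code (only the hourly prices follow the paper's Table 1).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GPUType:
    """A 'bin': one GPU instance type with a rental cost and a normalized capacity."""
    name: str
    hourly_cost: float     # $/hour for one instance
    capacity: float = 1.0  # each GPU's maximum load is treated as 1 (100%)

@dataclass(frozen=True)
class RequestSlice:
    """An 'item': a slice of the workload with a request size and a request rate."""
    input_tokens: int   # upper bound of the slice's input-length bucket
    output_tokens: int  # upper bound of the slice's output-length bucket
    rate: float         # requests/s attributed to this slice

# Illustrative instances (hourly prices follow the paper's Table 1; sizes and rates are made up).
gpus = [GPUType("L4", 0.70), GPUType("A10G", 1.01),
        GPUType("A100-80G", 3.67), GPUType("H100", 7.516)]
slices = [RequestSlice(100, 100, 0.5), RequestSlice(2000, 500, 0.25)]
print(len(gpus), len(slices))
```

The allocator's job is then to decide how many instances of each bin type to rent and which items to pack into them, which is the ILP described later in § 5.2.4.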
M\u00b4 elange efficiently derives GPU allocations within 1.2 seconds, and attains SLO for > 99.95% of requests at a loose SLO, and > 99.5% at a tight SLO. In summary, this paper makes the following contributions: \u2022 We present an extensive analysis of GPU cost efficiency and identify three key LLM service characteristics as significant determinants of GPU cost efficiency: request size, request rate, and latency SLO (\u00a7 4). \u2022 We introduce M\u00b4 elange to efficiently select the most cost-efficient set of GPU instances for a given LLM deployment, while ensuring that the resulting allocation satisfies a prescribed SLO requirement (\u00a7 5). \u2022 We evaluate M\u00b4 elange\u2019s efficacy, demonstrating its significant cost reductions (up to 77%) across a range of real-world workloads, GPU types, and SLO constraints (\u00a7 6). 2 Type L4 A10G (PCIe) A100-80G (SXM) H100 (SXM) On-demand Price ($/h) 0.7 1.01 3.67 7.5163 Instance Provider GCP AWS Azure RunPod Instance Name g2-standard-4 g5.xlarge NC24ads A100 v4/N.A. N.A. Memory (GB) 24 24 80 80 Memory Bandwidth (GB/s) 300 600 1935 3350 FP16 (TFLOPS) 242 125 312 1979 Table 1: Specifications of four NVIDIA GPUs: L4, A10G, A100, and H100. 3.1 LLM Request Size Variance Unlike traditional machine learning workloads, LLM tasks exhibit significant variance in request sizes, or input and output lengths. For example, ResNet [16] requires a fixed-dimension input (image size) and results in a fixed-dimension output (classification size). Conversely, transformer-based language models are flexible to support variable-length prompts and produce variable-length generation sequences, as in the Chatbot Arena dataset [53] derived from a real-world LLM chatbot service. Figure 10 illustrates the request size distributions of Chatbot Arena, demonstrating the extensive diversity of request sizes in practical scenarios. Unsurprisingly, high variance in request sizes introduces significant variation in request latency. As illustrated in Figure 1, request latency can increase by 110\u00d7 when the input/output length expands from 25 tokens to 2000 tokens for the Llama2-7B model served on an A100 GPU. Consequently, it becomes crucial to recognize that LLM requests, unlike non-transformer models, impose varied loads on GPU resources. 2M\u00b4 elange is the French word for \u201cmixture\u201d 3H100\u2019s hourly pricing was computed as described in the Hardware section above. 3 (a) LLaMA-7B 85X (b) LLaMA-70B Figure 1: Request latency of different input/output lengths on A100-80G. 3.2 Unknown Output Length In most online services, an LLM request\u2019s output length is not known a priori. In this paper, we evaluate GPU cost efficiency based on both input and output lengths. We do this to develop a holistic understanding of GPU cost efficiency, but M\u00b4 elange\u2019s GPU provisioning decision does not require specific knowledge of the output lengths of individual requests. Instead, it relies only on an estimated distribution of request sizes. We believe it is a fair assumption that a service\u2019s GPU allocator is given a distribution of expected request sizes based on the historical data of previously served input and output lengths. Because output lengths are a significant contributor to the load of individual requests, unknown output lengths are primarily a challenge for the load balancer, not the allocator. While important, the task of output length prediction for load balancing is orthogonal to M\u00b4 elange. 
Therefore, to evaluate the efficacy of M\u00b4 elange\u2019s GPU allocations, we use a load balancer that assumes knowledge of output lengths. We are actively working to remove this assumption by exploring load balancers based on output length prediction. There are several prior works that perform online LLM output length prediction with high accuracy [19, 55], but they have not been applied to load balancing. To the best of our knowledge, there is no load balancer that addresses the problem of unknown output lengths, and we believe this to be an promising area of future work. 4 GPU Cost Efficiency Analysis In this section, we analyze GPU cost efficiency in the context of LLM serving. We first describe our key definitions (\u00a7 4.1), then evaluate the cost efficiency of serving Llama2-7b on two widely used GPUs, NVIDIA\u2019s A100 [34] and A10G [33] to show that GPU cost efficiency is significantly influenced by request size(\u00a7 4.2), request latency SLO(\u00a7 4.3), and request rate(\u00a7 4.4). Finally, we validate the generality of our findings by extending our investigation to include additional hardware, specifically NVIDIA\u2019s H100 and L4 GPUs, and a larger model variant, Llama2-70B (\u00a7 4.5). For clarity, the plots are tagged with the request size, request rate, and SLO used to generate the plot. In each setting, we use vLLM-0.2.7 as the inference engine [21]. Results can differ across versions. 4.1 Definitions Service-level Objective (SLO). As in prior work [21, 56, 51], we use the average Time Per Output Token (TPOT) as our Service-level Objective (SLO). Average TPOT is determined by dividing total request latency by the number of generated tokens. In general, SLOs are application dependent: in-line code editors (e.g., GitHub Copilot [29]) require tight latency deadlines to suggest real-time code additions, whereas text summarization services may permit additional processing time to generate concise and accurate summaries for large documents. There are other popular definitions of SLO, such as time to first token and total request 4 (a) Equivalent input and output lengths (b) Input and output lengths vary independently Figure 2: Figure (a) depicts A10G and A100\u2019s relative T/$ across request sizes. Figure (b) expands (a) into separate input and output length dimensions. Tile colors indicate which GPU achieved higher T/$, and values represent the most cost efficient GPU\u2019s percent increase of T/$ relative to the less cost efficient GPU. latency. To simplify our discussion, we use only TPOT, however, M\u00b4 elange is flexible to support alternative definitions of SLO. Cost Efficiency Metric. We use tokens per dollar (T/$) to measure GPU cost efficiency, which is calculated by summing input and output token lengths served within some time period, and dividing the sum by the GPU\u2019s rental cost for the same period. The resulting value enables us to directly compare cost efficiency across GPU instance types with different rental costs. Pricing inference based on token lengths is a common practice in LLM services [36, 12], but some services set different prices for input and output tokens. We only compare T/$ between GPUs in settings where the request sizes are the same, so we do not lose generality to such cost models. In settings where request sizes differ, we report the overall cost of the GPU allocation that meets the aggregate workload. In general, we derive T/$ based on profiling a GPU at maximum saturation. 
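As a concrete illustration of the two definitions above, the short sketch below computes a request's average TPOT and a GPU's tokens per dollar over a measurement window. The numbers are hypothetical and only the hourly prices follow Table 1, so this is an illustrative example rather than the paper's profiling code.

```python
def average_tpot(request_latency_s: float, output_tokens: int) -> float:
    """Average time per output token (s/token): total request latency / generated tokens."""
    return request_latency_s / output_tokens

def tokens_per_dollar(input_tokens: int, output_tokens: int,
                      window_hours: float, hourly_cost: float) -> float:
    """T/$: input + output tokens served in a window, divided by the GPU's rental cost."""
    return (input_tokens + output_tokens) / (window_hours * hourly_cost)

# Hypothetical request: 250 output tokens generated in 10 s meets a 40 ms TPOT SLO.
assert average_tpot(10.0, 250) <= 0.040

# Hypothetical hourly serving volumes: A10G ($1.01/h) serving 3.0M tokens vs.
# A100 ($3.67/h) serving 8.0M tokens in the same hour.
print(tokens_per_dollar(1_500_000, 1_500_000, 1.0, 1.01))  # ~2.97M tokens/$
print(tokens_per_dollar(4_000_000, 4_000_000, 1.0, 3.67))  # ~2.18M tokens/$
```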
When an SLO is specified, T/$ is calculated by finding the highest GPU saturation at which average TPOT still meets the SLO requirement. 4.2 Request Size and Cost Efficiency We now show that request sizes, shown to be widely varying (\u00a7 3.1), dramatically affect GPU cost efficiency. We served Llama2-7b on A100 and A10G (specifications reported in Table 1), and derived each GPU\u2019s T/$ at maximum GPU saturation across a range of request sizes, with results in Figure 2a. Interestingly, neither GPU achieves highest T/$ across the entire request size space. Instead, each GPU achieves greater cost efficiency within distinct regions of the request size spectrum. For smaller request sizes, A10G exhibits up to 2.6\u00d7 greater T/$ than A100. Conversely, for larger request sizes, A100 achieves up to 1.5\u00d7 the cost efficiency of A10G. We extend this exploration to include both input and output lengths in Figure 2b to observe how they affect cost efficiency separately. We find that the two dimensions influence cost efficiency in a similar manner: smaller sizes benefit A10G, and larger sizes are best served on A100. Once again, there exists a clear boundary within the input/output length spectrum where the cost efficiency advantage shifts from A10G to A100 as request sizes increase. In fact, selecting a single GPU type to serve requests across the entire request size space misses opportunities to produce up to 72% more output tokens for the same cost. Source of Cost Efficiency Variation. Digging deeper into why request size impacts relative cost efficiency between GPUs, we find that it is largely due to the heterogeneity of GPU hardware. Given that batch size directly influences throughput (i.e., request processing rate), we inspect the source of cost efficiency variation by examining the effect of request size on achieved batch size. Figure 3 depicts the absolute batch sizes and 5 (a) Absolute batch sizes (b) Dollar-normalized batch sizes Figure 3: (a) depicts the absolute batch sizes of A10G and A100 serving Llama2-7b at maximum saturation, (b) reports the same batch sizes divided by GPU cost, plotting with respect to A10G. batch sizes normalized by instance cost of each GPU at maximum saturation. Note that Figure 3b closely resembles Figure 2a\u2019s plot of relative T/$ at maximum saturation, verifying that batch size indeed serves as a proxy for throughput. A10G and A100 have similar dollar-normalized batch sizes at 250 input/output tokens, but as the request size increases to 2000 input/output tokens, A10G\u2019s absolute batch size decreases by a factor of 9\u00d7 whereas A100\u2019s only decreases by 6\u00d7 due to its superior memory size and bandwidth. As a result, A100\u2019s cost efficiency advantage over A10G increases accordingly with the increase in request size. In contrast, reducing the request size from 250 to 25 input/output tokens sees A10G\u2019s batch size expanding by 15.2\u00d7, whereas A100\u2019s growth is more modest at 5.89\u00d7. We find that this difference is primarily due to the interference of mixing prefill and decode phases of a greater number of requests, as demonstrated in prior work [17]. Because A100\u2019s batch sizes are larger in absolute terms, A100 is more significantly constrained by per-request latency overheads than A10G is. As a result, A10G\u2019s dollar-normalized batch size exceeds A100\u2019s at short request lengths, leading to greater overall T/$ for A10G. 
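A hedged numeric sketch of this dollar-normalized comparison is given below; the saturated batch sizes are made-up placeholders rather than the measurements behind Figure 3, but they show how the most cost-efficient GPU can flip as the request size grows.

```python
def dollar_normalized(batch_size: float, hourly_cost: float) -> float:
    """Batch size per rental dollar, a proxy for throughput per dollar (T/$)."""
    return batch_size / hourly_cost

# Hypothetical saturated batch sizes for a small vs. a large request size.
profiles = {
    "small (25 in / 25 out)":     {"A10G": 60.0, "A100": 95.0},
    "large (2000 in / 2000 out)": {"A10G": 4.0,  "A100": 16.0},
}
cost = {"A10G": 1.01, "A100": 3.67}
for size, batches in profiles.items():
    best = max(batches, key=lambda g: dollar_normalized(batches[g], cost[g]))
    print(size, "->", best)
# With these made-up numbers the boundary falls between the two sizes:
# A10G wins for the small requests, A100 for the large ones.
```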
This illustrative case demonstrates how the interaction between request size and achieved T/$ can be subtle, and creates a cost efficiency trade-off space among GPU types. Key Takeaways: GPU cost efficiency is highly dependent on the sizes of requests served. Within the request size space, there are regions where serving with different GPU types is the most cost-effective. In general, lower-end GPUs are more cost-effective for small request sizes whereas higher-end GPUs are best for large request sizes. 4.3 SLO and Cost Efficiency In this section, we show the impact of SLO on cost efficiency. We measure T/$ by finding the maximum saturation of each GPU while average TPOT remains below SLO, and repeat this across several TPOT deadlines (40ms to 120ms) as shown in Figure 4. Under tight SLO constraints (<60ms), A100 demonstrates significantly greater T/$ than A10G (> 2\u00d7) due to A10G\u2019s higher processing latency, which severely limits its output token rate. However, as the SLO is gradually loosened (60-120ms), A10G\u2019s higher latency is less problematic, dramatically increasing its T/$ and surpassing that of A100 (by > 40%). In general, when SLO is stringent, high-end low-latency GPUs are the most viable option because cheaper high-latency GPUs are unable to meet the steep performance requirements. Loosening the SLO increasingly permits the use of cheaper GPUs that can meet the reduced performance requirements at much lower cost. Further, Figure 5 highlights the interplay between SLO and request size to show that neither can be considered in isolation when determining cost efficiency. Varying the latency SLO adjusts the boundary in the request size space between which different GPU types are more cost effective, and also impacts the degree to which 6 Figure 4: T/$ comparison between A10G and A100 across a range of TPOT SLO parameters. Figure 5: Relative increase in T/$ when combining SLO and request size. Shaded areas indicate regions where A10G fails to satisfy the specified SLO. one GPU is more cost effective than the other. For example, with a 40-50ms SLO, A100 always has higher T/$ (by up to 123%). At 70ms, A10G shows modest benefit over A100 for small request sizes. And at 100-120ms, A10G demonstrates much greater T/$ advantage over A100 for the same request sizes (up to 61%). Key Takeaways: To comply with strict SLOs, expensive GPUs are often necessary due to the increased latency of cheaper GPUs. However, as SLO is loosened, low-end GPUs can be used to cut deployment costs. 4.4 Request Rate and Cost Efficiency In this section, we show how request rates influence which GPU, or set of GPUs, is the most cost-effective. Figure 6 illustrates the cost of serving Llama2-7b for a range of request rates using three provisioning strategies: only A10Gs, only A100s, or a mix of both A10Gs and A100s. The y-axis is absolute cost instead of T/$ because each provisioning strategy serves the same request rates and thus the same output tokens; only the cost varies across strategies. As the request rate increases, A100-only is increasingly more cost-effective relative to A10G-only. This is because the requests in this plot were of size [1000 in tokens, 250 out tokens], which \u00a7 4.2 shows is more cost effective on A100. However, even in this case, the A10G-only strategy still presents benefits at low request rates (0 \u22121.5 req/s). 
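The same right-sizing trade-off can be sketched at the allocation level. The snippet below estimates hourly deployment cost for A10G-only, A100-only, and a simple greedy mix at a few request rates; the per-GPU sustainable request rates are hypothetical stand-ins for profiled values, not figures from the paper.

```python
import math

HOURLY_COST = {"A10G": 1.01, "A100": 3.67}
# Hypothetical max requests/s one instance sustains for this request size within SLO.
MAX_RATE = {"A10G": 0.9, "A100": 3.5}

def single_type_cost(gpu: str, rate: float) -> float:
    """Provision only one GPU type: round the instance count up to cover the rate."""
    return math.ceil(rate / MAX_RATE[gpu]) * HOURLY_COST[gpu]

def greedy_mixed_cost(rate: float) -> float:
    """Fill with A100s, then cover the remainder with cheaper A10Gs."""
    n_a100 = int(rate // MAX_RATE["A100"])
    remainder = rate - n_a100 * MAX_RATE["A100"]
    n_a10g = math.ceil(remainder / MAX_RATE["A10G"])
    return n_a100 * HOURLY_COST["A100"] + n_a10g * HOURLY_COST["A10G"]

for r in [1.0, 4.0, 8.0]:
    print(r, single_type_cost("A10G", r), single_type_cost("A100", r), greedy_mixed_cost(r))
```

With made-up rates like these, a greedy mix is not always the cheapest option, which is one reason the allocation problem is later formulated as an ILP over all GPU types rather than solved by filling bins greedily.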
Idle periods of low activity are common in real-world services, and the GPU deployment should right-size to the cheaper GPU (here, A10G) when a higher-end GPU (here, A100) is drastically under-utilized. Further, a notable finding is that a hybrid deployment approach, combining both A10G and A100 GPUs, yields the greatest cost efficiency. Because A100s have such large capacity, scaling with only A100s is coarse-grained and leads to under-utilized resources. Instead, A10Gs and A100s can be mixed such that A100s satisfy the bulk of the service demands, while A10Gs handle the remaining load at reduced cost. Key Takeaways: Provisioning a mix of GPU types enables finer-grained resource scaling decisions, which boosts cost efficiency by better utilization of the provisioned instances. At low request rates, LLM deployments should right-size to cheaper low-end GPUs instead of under-utilizing expensive high-capacity GPUs. At higher request rates, a mix of GPU types can be used to better match request load. 4.5 Other Models and Hardware In this section, we demonstrate the generality of our findings by including additional GPU types and a larger model variant (Llama2-70b) to our analysis. In Figure 8, we present relative cost efficiency across four types 7 of GPUs, and observe a progression of the most cost efficient GPU from L4 to A10G, then A100, and finally H100 as the input/output lengths extend. This pattern underscores the advantage of high-end GPUs for processing longer context and output lengths, while low-end GPUs emerge as more cost-effective for shorter input/output scenarios. Similar trends are observed with the Llama2-70B model when comparing the H100 and A100 GPUs, as detailed in Figure 7, reinforcing these insights. Key Takeaways: The effects of request size on GPU cost efficiency (\u00a7 4.2) generalize to settings with several GPU types and larger model sizes, and similarly leads to significant GPU cost efficiency variations in the request size space. Figure 6: Aggregate GPU hourly rental cost at different request rates. A mix of A100 and A10G consistently achieves the lowest cost. Figure 7: T/$ comparison between H100x2 and A100x2 serving Llama2-70b. 5 M\u00b4 elange: Automating Cost-Efficient GPU Selection Building on the analysis in Section 4, we present M\u00b4 elange, a framework that automates the selection of GPU instances to meet an LLM service\u2019s demand at minimal cost while adhering to SLO constraints. We frame the GPU selection task as a cost-aware bin-packing problem with GPUs as bins and requests as items, and employ Integer Linear Programming (ILP) to derive the solution. 5.1 Problem Formulation We begin by defining the key terms utilized in our problem formulation and solution. Workload: A workload is characterized by its overall request rate along with a distribution of input and output sizes. Given the inherent variability in request sizes, it is crucial to treat the input and output sizes not as fixed values, but as distributions spanning a range of possible lengths. Specifically, as illustrated in Figure 9, a workload is a histogram where each bucket corresponds to a range of request sizes and a bucket\u2019s value is the request rate of requests within the bucket\u2019s size range. Deployment Cost: Cost is computed by summing the hourly rates for each of the selected instances. 
SLO: We use average TPOT to define SLO, however, M\u00b4 elange can be extended to other definitions of SLO, such as time to first token (TTFT), by profiling maximum T/$ within SLO constraints described in \u00a7 4.1 for any given latency constraint definition. Problem Definition: Given a workload, GPU instance costs, and SLO requirements, our objective is to provision GPUs that can serve the workload at minimal cost while adhering to latency SLO constraints. 8 (a) Best GPU relative to second best GPU (b) Best GPU relative to worst GPU Figure 8: Cost efficiency comparison across four GPUs. Tile colors indicate which GPU achieves greatest T/$ at max saturation for the respective request size. Tile values in (a) are the percent increase in T/$ of the best GPU compared to the second best. Tile values in (b) compare the best GPU to the worst GPU. Black boxes indicate request sizes for which only A100 and H100 are compared because A10G and L4 have too small memory capacity to handle a single request within this size, with more detail in \u00a7 6.2 . 3x A10G 2x A100 1x H100 Obj: Minimize Cost Constraint: Meet SLO Figure 9: Workflow illustration depicting the process of segmenting request rates into slices, followed by the allocation of hardware resources based on solver recommendations. 5.2 Allocation Algorithm The intuition of M\u00b4 elange\u2019s solution is to find the minimal-cost set of GPUs (bins) into which the workload (items) can be bin-packed. To do so, our strategy partitions workload buckets into slices, then assigns the slices to GPUs. Our constraints ensure that the load added to each GPU by the assigned slices does not surpass its maximum capacity. The optimization objective is to reduce the total deployment cost. We discuss bucket size considerations (\u00a7 5.2.1), describe slices in more detail (\u00a7 5.2.2), discuss how load is calculated (\u00a7 5.2.3), then finally detail our ILP formulation (\u00a7 5.2.4). 5.2.1 Request Buckets As described in \u00a7 5.1, a workload is represented by a histogram. The histogram has two dimensions, input length and output length, and each bucket\u2019s value is the aggregate request rate for requests within the bucket\u2019s size range. We make the simplifying assumption that the load (see \u00a7 5.2.2) of each request is the same as the largest request size in the same bucket. This simplifies handling diverse request sizes at the cost of over-estimating the load. Bucket sizes can be tuned to reach the desired balance between granularity and solution complexity, but we have not found overall performance to be sensitive to bucket sizes. 9 5.2.2 Slices A naive bin-packing of the workload into GPUs is to assign each bucket to a single GPU. However, the overall load of a single bucket may exceed the capacity of a single GPU, and the bucket may be most cost effectively served by splitting across different GPU types. Therefore, for finer-grained bin-packing, buckets are broken down into slices, which are characterized by a request size and rate. A parameter, slice factor, indicates the number of slices that each bucket is divided into. In a setting with a slice factor of 8 and a bucket corresponding to requests of size [25 \u2212100 in tokens, 25 \u2212100 out tokens] with a request rate of 4 requests/s, the bucket would be segmented into 8 slices each corresponding to a request rate of 0.5 requests. 
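The bucket-and-slice construction can be sketched as follows; the bucket boundaries and rates are illustrative, and splitting the 4 req/s bucket with a slice factor of 8 reproduces the 0.5 req/s slices from the example above.

```python
SLICE_FACTOR = 8

# Workload histogram: (input-length bucket bound, output-length bucket bound) -> aggregate req/s.
# Bucket boundaries and rates below are illustrative.
workload = {
    (100, 100): 4.0,    # e.g. the [25-100 in, 25-100 out] bucket arriving at 4 req/s
    (2000, 500): 1.0,
}

def to_slices(workload, slice_factor):
    """Split every bucket into slice_factor equal-rate slices keyed by the bucket's largest size."""
    slices = []
    for (max_in, max_out), rate in workload.items():
        slices.extend([(max_in, max_out, rate / slice_factor)] * slice_factor)
    return slices

slices = to_slices(workload, SLICE_FACTOR)
print(slices[0])    # (100, 100, 0.5): a 0.5 req/s slice of the first bucket
print(len(slices))  # 16 slices in total for the two buckets
```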
5.2.3 Load The ILP solver requires an estimate of the load each slice contributes to a GPU to ensure that the load assigned to an instance does not exceed its capacity and violate the latency SLO. The load of a slice with request size s and rate r on GPU G is calculated as $\frac{r}{\mathrm{MaxTput}(G, s, \mathrm{SLO})}$, where MaxTput(G, s, SLO) is the maximum request/s G can achieve for requests of size s while remaining under the latency deadline SLO. For instance, if MaxTput(G, s, SLO) = 10 reqs/s and r = 1, the load is calculated as 1/10 = 0.1 or 10%. Each GPU's maximum capacity is defined as 1 (or 100%). This approximation allows us to calculate the aggregate load of requests with differing sizes and rates. Prior work has proposed cost models for LLM requests [32, 41], but there is not yet a definitive formulation. We found our simple approximation to perform well, but it can be easily replaced with alternative cost models. Based on offline profiling, we compute MaxTput(G, s, SLO) for each bucket in the workload histogram. 5.2.4 ILP Formulation We now describe our ILP formulation. We formulate the problem with two primary decision variables. First, let A be a matrix $\{0, 1\}^{N \times M}$, where N denotes the total number of slices, and M represents the number of GPU instance types. An element $A_{i,j}$ within this matrix is set to 1 if slice i is assigned to GPU type j, and 0 otherwise. The second decision variable, B, is a vector of length M, where each element $B_j$ specifies the number of GPUs of type j to be allocated. $c_j$ denotes the cost of GPU type j and $r_i$ is the request rate of slice i. L is computed offline by the process described in § 5.2.3, and element $L_{i,j}$ is the percent load of 1 req/s of slice i's request size on GPU type j. Our objective is to minimize the total GPU allocation cost, with the following mathematical representation: $\arg\min_{B} \sum_{j=1}^{M} B_j \cdot c_j$ (1). The ILP constraints are as follows. First, each task slice is assigned to exactly one GPU type: $\forall i \in \{1, \ldots, N\}, \; \sum_{j=1}^{M} A_{i,j} = 1$ (2). Second, for each GPU type, the total number of GPUs designated in vector B must adequately accommodate the cumulative load prescribed to it in matrix A: $\forall j \in \{1, \ldots, M\}, \; \sum_{i=1}^{N} A_{i,j} \cdot L_{i,j} \cdot r_i \leq B_j$ (3). Lastly, the entries within matrix A are binary, and the elements of vector B are non-negative integers: $\forall i \in \{1, \ldots, N\}, \forall j \in \{1, \ldots, M\}, \; A_{i,j} \in \{0, 1\}$ (4) and $\forall j \in \{1, \ldots, M\}, \; B_j \geq 0$ (5). Upon resolving equations (1) through (5), the decision variable B holds the minimal-cost set of GPUs that meet the SLO constraint. We use an off-the-shelf solver to solve the ILP problem [30]. 6 Evaluation In this section, we assess the performance of Mélange using four GPU types across settings of diverse request sizes, rates, and SLOs. Our evaluations show that Mélange consistently achieves significant cost savings (up to 77%) compared to single-GPU-type strategies and Mélange's selected GPU allocations successfully attain TPOT SLO for over 99.5% of requests. 6.1 Methodology Hardware Setup. We use four NVIDIA GPUs in our evaluations, with specifications detailed in Table 1. To determine GPU cost, we select the lowest on-demand price available from major cloud providers (AWS, Azure, and GCP). Because on-demand H100 is not offered by these major providers, we defer to the pricing from RunPod [40] due to its popularity and availability.
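Returning to the ILP of § 5.2.4, a minimal sketch of equations (1)-(5) using the open-source PuLP solver is shown below. The paper says only that an off-the-shelf solver is used, so the choice of PuLP (pip install pulp), the two GPU types, and the load profile L are assumptions for illustration.

```python
import pulp

gpu_types = {"A10G": 1.01, "A100": 3.67}    # type -> c_j ($/hour, from Table 1)
slices = {"s0": 0.5, "s1": 0.5, "s2": 2.0}  # slice id -> r_i (req/s), illustrative
# L[i][j]: percent load that 1 req/s of slice i places on one GPU of type j.
# In the paper this comes from offline profiling; the values here are hypothetical.
L = {"s0": {"A10G": 0.40, "A100": 0.15},
     "s1": {"A10G": 0.40, "A100": 0.15},
     "s2": {"A10G": 1.20, "A100": 0.30}}

prob = pulp.LpProblem("melange_gpu_allocation", pulp.LpMinimize)
# A[i][j] = 1 iff slice i is assigned to GPU type j (eq. 4).
A = pulp.LpVariable.dicts("A", (list(slices), list(gpu_types)), cat="Binary")
# B[j] = number of GPUs of type j to provision (eq. 5).
B = pulp.LpVariable.dicts("B", list(gpu_types), lowBound=0, cat="Integer")

# Objective (eq. 1): minimize total hourly allocation cost.
prob += pulp.lpSum(B[j] * gpu_types[j] for j in gpu_types)
# Eq. 2: every slice is assigned to exactly one GPU type.
for i in slices:
    prob += pulp.lpSum(A[i][j] for j in gpu_types) == 1
# Eq. 3: per GPU type, the assigned load must fit within the provisioned instances.
for j in gpu_types:
    prob += pulp.lpSum(A[i][j] * L[i][j] * slices[i] for i in slices) <= B[j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: int(B[j].value()) for j in gpu_types})  # minimal-cost GPU counts
```

With these made-up numbers the solver packs all three slices onto three A10Gs (total 2.8 load, $3.03/h), the cheapest feasible combination; changing the prices, rates, or load profile shifts the answer, which is the point of solving an ILP rather than fixing a single GPU type.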
To ensure fair cost comparisons, we normalize RunPod\u2019s H100 pricing to match the pricing structures of major platforms. We calculate this by comparing RunPod\u2019s H100 cost ($4.69) to RunPod\u2019s A100-80G cost ($2.29), then adjusting relative to the A100\u2019s price on major clouds ($3.67), resulting in a normalized price of (4.69/2.29) \u00d7 3.67 = $7.516 for H100. Model and Inference Engine. In each experiment, we serve Llama2-7B (Llama-2-7b-hf) [45] using version 0.2.7 of the vLLM inference engine [21] with default parameters. M\u00b4 elange Parameters. The bucket ranges correspond to Figure 8 and comprise of 10 input length ranges and 6 output length ranges, for a total of 60 buckets. The slice factor is set to 8, for a total of 60 \u00b7 8 = 480 slices. Datasets. We evaluate across three distinct datasets to cover a wide range of application scenarios. The specific input/output length distributions of these datasets are illustrated in Figure 10. \u2022 Short context: This scenario simulates real-time conversational dynamics by employing the Chatbot Arena dataset (lmsys/lmsys-chat-1m) [53], which is derived from real-world chatbot conversations. The dataset is skewed towards shorter context (< 2000 tokens) because much of the data was generated in conversation with models that did not yet have a larger context window. \u2022 Long context: This scenario represents tasks with extensive input, such as summarization. We utilize the PubMed dataset (ccdv/pubmed-summarization) [7], comprising 133 thousand scientific papers from PubMed.com, a popular dataset for large-scale text summarization studies. \u2022 Mixed long/short context: This scenario captures settings with a combination of long and short context, such as an assistant that engages in succinct dialogue and responds to large document-based queries. To model this, we create a synthetic dataset by sampling 80% of requests from the Arena dataset and 20% of requests from the PubMed dataset. SLOs. We referred to current LLM inference benchmarks [3] to set TPOT SLOs, and opted for 40ms in contexts where swift responses are essential, and 120ms where longer response times are acceptable. Both selected SLOs surpass the average human reading speed, ensuring that our SLOs align with practical user experience considerations. However, as discussed in \u00a7 4.1, M\u00b4 elange is flexible to support alternative definitions of SLO. Baselines. We benchmark against deployments that utilize solely one GPU type. To derive baseline GPU allocations, we use the same ILP formulation from \u00a7 5.2.4 but restrict the solver to only a single GPU type. 6.2 Cost Savings Analysis We first compare the overall deployment costs of M\u00b4 elange\u2019s allocation compared to the single-GPU-type baselines for each dataset and SLO across a range of request rates (1-32 requests/s). Figure 11 displays all costs normalized against the cost of the A100-only strategy (shown in blue dotted lines), and the detailed GPU allocations are included in Appendix A.1. A10G-only and L4-only provisioning strategies are only 11 0 2500 5000 7500 10000 12500 Input Length (tokens) 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Fraction Dataset Mixed (mean=1278.04) Arena (mean=329.43) Pubmed (mean=4174.13) (a) Input length distributions. 0 250 500 750 1000 Output Length (tokens) 0.000 0.025 0.050 0.075 0.100 0.125 Fraction Dataset Mixed (mean=219.87) Arena (mean=195.66) Pubmed (mean=314.1) (b) Output length distributions. Figure 10: Dataset input and output length distributions. 
included in the Arena dataset analysis because of PubMed and Mixed datasets\u2019 large requests. The key-value cache generated from even a single large request (\u223c12000+ tokens) exceeds the memory capacity of L4 and A10G (24GB). In M\u00b4 elange\u2019s allocation, L4 and A10G are included but restricted to only serve requests of size less than 12000 tokens. Loose SLO: 120ms Figures 11a, 11c, and 11e depict results with a loose 120ms TPOT SLO. M\u00b4 elange\u2019s mixed-GPU allocation is consistently the most cost-efficient approach, achieving cost reductions of up to 77%, 33% and 51% across the three evaluated datasets. \u2022 Arena Dataset. In Figure 11a, M\u00b4 elange achieves 15-77% cost savings. Lower-tier GPUs such as A10G/L4 offer superior cost efficiency in comparison to A100/H100 when handling lower request rates. In particular, for 1-2 requests/s, H100 has egregiously high cost because the load is not enough to even saturate a single GPU. Yet, as request rate increases, A10G/L4\u2019s cost advantage diminishes as high-capacity GPUs become a more reasonable choice. This aligns with findings in \u00a7 4.4 that emphasize matching GPU size with request rate. Note, however, that A10G/L4 are still competitive with A100 at higher request rates due to their T/$ advantage for smaller request sizes. \u2022 PubMed Dataset. In Figure 11c, M\u00b4 elange achieves 15-33% cost savings. H100\u2019s cost efficiency generally outperforms A100\u2019s, attributable to the dataset\u2019s longer context lengths for which H100 achieves higher T/$. However, there are still many request sizes for which A100 is the best, and this creates the opportunity for the mixed-GPU strategy to squeeze up to 25% cost savings. \u2022 Mixed Dataset. In Figure 11e, M\u00b4 elange achieves 13-51% cost savings. A100\u2019s cost efficiency is boosted relative to the PubMed dataset due to it being generally more cost efficient than H100 for small request sizes. This distinction highlights how the nature of the workload \u2014 specifically the variance in request lengths \u2014 can significantly influence relative cost efficiency across GPU types. Strict SLO: 40ms Figures 11b, 11d, and 11f depict the results from tightening the TPOT SLO to 40ms. Once again, M\u00b4 elange achieves the lowest cost in all settings, with up to 68%, 22%, and 51% reduction across the three evaluated datasets. \u2022 Arena Dataset In Figure 11b, M\u00b4 elange achieves 9-68% cost savings. A10G/L4 display considerably higher relative cost than in the loose SLO setting (Figure 11a). This is explained by A10G/L4\u2019s higher latency, which requires many more instances to be provisioned in order to meet the tight SLO deadline. M\u00b4 elange\u2019s mixed-GPU strategy is able to adapt to the strict SLO and provision mostly A100/H100\u2019s which exhibit much lower latencies. \u2022 PubMed Dataset In Figure 11d, M\u00b4 elange achieves 2-22% cost savings. H100 achieves a significant cost advantage over A100, especially relative to the 120ms setting ( 11c). H100 generally achieves lower latency than A100, making it the preferred option for long-context tight-SLO settings. 12 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 A10G L4 Mix (a) Short context: Arena, SLO = 120ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 3 Cost (w.r.t A100) H100 A100 A10G L4 Mix (b) Short context: Arena, SLO = 40ms. 1 2 4 8 16 32 Request Rate (req/s) 0.0 0.5 1.0 Cost (w.r.t A100) H100 A100 Mix (c) Long context: PubMed, SLO = 120ms. 
1 2 4 8 16 32 Request Rate (req/s) 0.0 0.5 1.0 Cost (w.r.t A100) H100 A100 Mix (d) Long context: PubMed, SLO = 40ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 Mix (e) Mixed long/short context, SLO = 120ms. 1 2 4 8 16 32 Request Rate (req/s) 0 1 2 Cost (w.r.t A100) H100 A100 Mix (f) Mixed long/short context, SLO = 40ms. Figure 11: Deployment cost across different datasets and SLOs. \u2022 Mixed Dataset In Figure 11f, M\u00b4 elange achieves 4-51% cost savings. A100 gains back some advantage over H100 relative to the PubMed setting due to the prevalence of shorter-context requests. Experiment Takeaways In loose SLO settings, M\u00b4 elange can utilize all GPU types (both lowand high-end) to serve request sizes for which they achieve greatest T/$ and closely match capacity to the request volume, significantly reducing costs (up to 77%). In tight SLO settings, A10G and L4 are less beneficial due to their high latency, reducing the cost savings M\u00b4 elange can achieve relative to single-GPU-type strategies. However, even in this setting, M\u00b4 elange squeezes large cost savings (up to 67%) based on the same principles. These evaluations highlight the key benefits of exploiting GPU heterogeneity in a unified allocation strategy: 1) GPU types can serve request sizes for which they have greatest T/$, 2) mixing GPU types enables fine-grained provisioning to closely match capacity to request volume, and 3) the allocation strategy can adapt to differing SLO stringency levels and continue to utilize the benefits of (1) and (2). In summary, M\u00b4 elange efficiently navigates the diversity of request sizes, rates, SLOs, and GPU types to automatically find the best GPU allocation and significantly reduce deployment cost. 6.3 SLO Satisfaction Next, we assess M\u00b4 elange\u2019s ability to select GPU allocations that meet the specified TPOT SLO. To do so, we provision actual cloud GPU instances based on M\u00b4 elange\u2019s selected allocation for each of the six experiment 13 Figure 12: Experiment TPOT CDFs. Figure 13: TPOT CDF from unknown output length experiment. settings in 6.2 at 4 requests/s. We deploy Llama2-7b with vLLM-0.2.7 on each of the provisioned GPUs. We sample request sizes randomly from the chosen dataset to serve 2000 live requests. We record the latency of each request and divide by output token length to derive average TPOT. Load Balancer. Most settings use multiple GPU instances, requiring a load balancer to distribute requests across them. The problem of load balancing variable-size requests to heterogeneous backends has been previously explored [11], and we leave it to future work to create adaptations for serving LLMs on heterogeneous GPUs. We instead use a simple variation of Join Shortest Queue (JSQ) routing [14]: the load balancer tracks outstanding requests for each GPU, and converts them to percent load as described in \u00a7 5.2.3. Upon receiving a new request, the load balancer chooses a GPU backend such that the resulting percent load on the chosen GPU is minimized relative to choosing any other GPU. This policy performed well in our experiments, but we expect that improvements to the load balancing policy will reduce tail latency. Results. Figure 12 presents CDFs of the observed average TPOTs across experiments. With an SLO of 120ms, over 99.95% of requests met SLO. When the SLO was tightened to 40ms, SLO adherence reduced to over 99.5% of requests. 
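The load-balancing policy used in these experiments can be sketched as follows. The max_tput table stands in for the offline-profiled MaxTput(G, s, SLO) lookup from § 5.2.3, and the class, GPU names, and numbers are illustrative assumptions rather than the experiment's actual implementation.

```python
class HeterogeneousJSQBalancer:
    """Join-Shortest-Queue variant: track outstanding percent load per GPU and send each
    new request to the GPU whose resulting percent load would be lowest."""

    def __init__(self, max_tput):
        # max_tput[gpu][size_bucket] = max req/s that GPU sustains for this request
        # size within SLO (offline profile; the values below are assumptions).
        self.max_tput = max_tput
        self.load = {gpu: 0.0 for gpu in max_tput}  # outstanding percent load per GPU

    def _request_load(self, gpu, size_bucket):
        # One in-flight request of this size occupies 1 / MaxTput of the GPU (Section 5.2.3).
        return 1.0 / self.max_tput[gpu][size_bucket]

    def route(self, size_bucket):
        gpu = min(self.load, key=lambda g: self.load[g] + self._request_load(g, size_bucket))
        self.load[gpu] += self._request_load(gpu, size_bucket)
        return gpu

    def complete(self, gpu, size_bucket):
        self.load[gpu] -= self._request_load(gpu, size_bucket)

# Illustrative profile: two GPU backends and two request-size buckets.
lb = HeterogeneousJSQBalancer({"a10g-0": {"small": 8.0, "large": 1.0},
                               "a100-0": {"small": 20.0, "large": 5.0}})
print(lb.route("large"))  # routed to the A100, which the large request loads least
```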
M\u00b4 elange effectively chose GPU allocations that reduce cost while adhering to latency objectives, however, we recognize that services may require even higher SLO adherence, so we investigated the source of SLO violations in our experiment. SLO Violation Investigation. Of all requests that violated TPOT SLO, we found that 84% failed to meet SLO due to one of two reasons: request rate bursts or co-location with large requests. In our experiments, requests are sent according to a Poisson process, which occasionally creates short-lived bursts that overload the GPU capacity. Further, we choose the size of model request by randomly sampling from the configured dataset. Occasionally, several large requests are chosen in sequence, which can temporarily exceed the service capacity. In an online production environment, it is common practice to over-provision resources in order to absorb such bursts and other load variations. Within our framework, a desired over-provisioning rate (say, 20%) can be achieved by increasing the request rate input to the solver by the same proportion (20%). We discuss the future work of practically deploying a system based on M\u00b4 elange in \u00a7 7. 6.4 Unknown Output Length As discussed in \u00a7 3.2, in order to focus on measuring the quality of M\u00b4 elange\u2019s chosen GPU allocation, our evaluations utilize a load balancing policy that knows output lengths. Given that this is not a realistic assumption, we briefly evaluate M\u00b4 elange\u2019s performance with a simple load balancing policy that is unaware of output lengths. We again note that we are actively working on addressing the limitations of unknown output lengths, and believe that LLM-specific load balancing that addresses this challenge is an exciting area for future work. We repeated the SLO satisfaction experiment (\u00a7 6.3) on the Arena dataset with a TPOT SLO of 40ms, but restricted the load balancer to only see request input lengths. The load balancer estimates output length by computing the average of all previous requests\u2019 output lengths. Otherwise, load balancing is 14 Request Rate Arena, SLO=120ms Arena, SLO=40ms PubMed, SLO=120ms PubMed, SLO=40ms Mix, SLO=120ms Mix, SLO=40ms 1 0.137 0.177 0.232 0.295 0.168 0.336 2 0.194 0.265 0.234 0.334 0.253 0.381 4 0.192 0.346 0.287 0.381 0.297 0.459 8 0.248 0.433 0.269 0.384 0.321 0.545 16 0.299 0.448 0.389 0.509 0.439 0.537 32 0.316 0.494 0.791 0.96 0.912 1.14 Table 2: Solver execution time. performed identically to experiments in \u00a7 6.3. Figure 13 presents the experiment\u2019s TPOT CDF. Only 97.2% of requests met the 40ms deadline, compared to 99.5% in the setting where output length is known, a 5.6\u00d7 increase in SLO violations. Almost all (91%) of the additional SLO violations were due to large requests landing on a lower-end GPU that would have otherwise landed on a higher-end GPU if the output length was known. This result demonstrates that errors in estimating output length can manifest as increased tail latency due to poor load balancing decisions, further motivating future work on load balancing for LLMs. Nevertheless, we show that over-provisioning can account for the error in predicting output lengths. We re-ran the experiment, but inflated M\u00b4 elange\u2019s request rate input by 5%, and observed that SLO adherence jumped back up to over 99.5%. 6.5 Solver Time We present the solver execution time from each experiment in Table 2. 
Across all datasets and request rates, the solver\u2019s execution time remains under 1.2 seconds, which is negligible compared to workload execution time. We observe a modest increase in solver execution time with higher request volumes, attributed to the greater complexity in slice assignment due to a greater number of required GPUs. However, this increase is sub-linear relative to the increase in request rate, and the solver\u2019s execution time remains practical. Further, the execution of the solver is a one-time event. Users are required to run the solver only prior to deployment or when there is a significant change in the distribution of request sizes or rates. 7 Future Work There are several interesting directions related to leveraging heterogeneous GPUs for LLM serving. First, adapting heterogeneity-aware load balancing policies specifically for LLM systems where output length is unknown could reduce tail latency that occur due to poor balancing decisions. Further, we believe that generative models beyond LLMs, including image generation, video generation, and embedding models, each of which could be benefited by heterogeneous serving systems. Finally, M\u00b4 elange effectively derives the best GPU allocation for a fixed workload distribution and request rate, but does not address other challenges of deploying a live LLM service such as handling GPU unavailability or responding to dynamically changing request rate and request size distribution. 8 Conclusion In this study, we conduct an analysis of GPU cost efficiency in LLM service deployments, and identify three key factors (request sizes, request rates, and Service Level Objectives (SLOs)) that significantly impact GPU cost efficiency. Based on these findings, we introduce M\u00b4 elange, a framework for deriving the minimal-cost GPU allocation that attains SLO for a given LLM service specification. We frame the task of GPU selection as a cost-aware bin-packing problem and formulate it as an integer linear program. Through evaluations on a range of GPUs, request sizes, request rates, and latency SLOs, M\u00b4 elange consistently demonstrates significant reductions in deployment costs (up to 77%) while providing high SLO attainment. 15" + }, + { + "url": "http://arxiv.org/abs/2211.10438v7", + "title": "SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models", + "abstract": "Large language models (LLMs) show excellent performance but are compute- and\nmemory-intensive. Quantization can reduce memory and accelerate inference.\nHowever, existing methods cannot maintain accuracy and hardware efficiency at\nthe same time. We propose SmoothQuant, a training-free, accuracy-preserving,\nand general-purpose post-training quantization (PTQ) solution to enable 8-bit\nweight, 8-bit activation (W8A8) quantization for LLMs. Based on the fact that\nweights are easy to quantize while activations are not, SmoothQuant smooths the\nactivation outliers by offline migrating the quantization difficulty from\nactivations to weights with a mathematically equivalent transformation.\nSmoothQuant enables an INT8 quantization of both weights and activations for\nall the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG,\nLlama-1/2, Falcon, Mistral, and Mixtral models. We demonstrate up to 1.56x\nspeedup and 2x memory reduction for LLMs with negligible loss in accuracy.\nSmoothQuant enables serving 530B LLM within a single node. 
Our work offers a\nturn-key solution that reduces hardware costs and democratizes LLMs. Code is\navailable at https://github.com/mit-han-lab/smoothquant.", + "authors": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han", + "published": "2022-11-18", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.01188v2", + "title": "Petals: Collaborative Inference and Fine-tuning of Large Models", + "abstract": "Many NLP tasks benefit from using large language models (LLMs) that often\nhave more than 100 billion parameters. With the release of BLOOM-176B and\nOPT-175B, everyone can download pretrained models of this scale. Still, using\nthese models requires high-end hardware unavailable to many researchers. In\nsome cases, LLMs can be used more affordably via RAM offloading or hosted APIs.\nHowever, these techniques have innate limitations: offloading is too slow for\ninteractive inference, while APIs are not flexible enough for research that\nrequires access to weights, attention or logits. In this work, we propose\nPetals - a system for inference and fine-tuning of large models collaboratively\nby joining the resources of multiple parties. We demonstrate that this strategy\noutperforms offloading for very large models, running inference of BLOOM-176B\non consumer GPUs with $\\approx$ 1 step per second, which is enough for many\ninteractive LLM applications. Unlike most inference APIs, Petals also natively\nexposes hidden states of served models, allowing to train and share custom\nmodel extensions based on efficient fine-tuning methods.", + "authors": "Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel", + "published": "2022-09-02", + "updated": "2023-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.00774v3", + "title": "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", + "abstract": "We show for the first time that large-scale generative pretrained transformer\n(GPT) family models can be pruned to at least 50% sparsity in one-shot, without\nany retraining, at minimal loss of accuracy. This is achieved via a new pruning\nmethod called SparseGPT, specifically designed to work efficiently and\naccurately on massive GPT-family models. We can execute SparseGPT on the\nlargest available open-source models, OPT-175B and BLOOM-176B, in under 4.5\nhours, and can reach 60% unstructured sparsity with negligible increase in\nperplexity: remarkably, more than 100 billion weights from these models can be\nignored at inference time. SparseGPT generalizes to semi-structured (2:4 and\n4:8) patterns, and is compatible with weight quantization approaches. The code\nis available at: https://github.com/IST-DASLab/sparsegpt.", + "authors": "Elias Frantar, Dan Alistarh", + "published": "2023-01-02", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.00978v4", + "title": "AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration", + "abstract": "Large language models (LLMs) have fundamentally transformed the capabilities\nof numerous applications, from natural language processing to more intricate\ndomain-specific tasks in robotics and autonomous driving. 
Moreover, the\nimportance of on-device LLMs has grown significantly in recent years.\nRunning LLMs on edge devices not only promises reduced latency and improved\nuser experience but also aligns with the increasing need for user privacy, as\ndata processing can occur locally. However, the astronomical model sizes of\nmodern LLMs and constraints of the edge devices, primarily in terms of memory\nsize and bandwidth, pose significant deployment challenges. In this paper, we\npropose Activation-aware Weight Quantization (AWQ), a hardware-friendly\napproach for LLM low-bit weight-only quantization. Our method is based on the\nobservation that weights are not equally important: protecting only 1% of\nsalient weights can greatly reduce quantization error. We then propose to\nsearch for the optimal per-channel scaling that protects the salient weights by\nobserving the activation, not weights. AWQ does not rely on any backpropagation\nor reconstruction, so it can well preserve LLMs' generalization ability on\ndifferent domains and modalities, without overfitting to the calibration set.\nAWQ outperforms existing work on various language modeling and domain-specific\nbenchmarks (coding and math). Thanks to better generalization, it achieves\nexcellent quantization performance for instruction-tuned LMs and, for the first\ntime, multi-modal LMs. Alongside AWQ, we implement TinyChat, an efficient and\nflexible inference framework tailored for on-device LLM/VLMs, offering more\nthan 3x speedup over the Huggingface FP16 implementation on both desktop and\nmobile GPUs. It also democratizes the deployment of the 70B Llama-2 model on\nmobile GPUs.", "authors": "Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, Song Han", "published": "2023-06-01", "updated": "2024-04-23", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2308.16369v1", "title": "SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills", "abstract": "Large Language Model (LLM) inference consists of two distinct phases - a\nprefill phase, which processes the input prompt, and a decode phase, which generates\noutput tokens autoregressively. While the prefill phase effectively saturates\nGPU compute at small batch sizes, the decode phase results in low compute\nutilization as it generates one token at a time per request. The varying\nprefill and decode times also lead to imbalance across micro-batches when using\npipeline parallelism, resulting in further inefficiency due to bubbles.\n We present SARATHI to address these challenges. SARATHI employs\nchunked-prefills, which splits a prefill request into equal-sized chunks, and\ndecode-maximal batching, which constructs a batch using a single prefill chunk\nand populates the remaining slots with decodes. During inference, the prefill\nchunk saturates GPU compute, while the decode requests 'piggyback' and cost up\nto an order of magnitude less compared to a decode-only batch. Chunked-prefills\nallows constructing multiple decode-maximal batches from a single prefill\nrequest, maximizing coverage of decodes that can piggyback. Furthermore, the\nuniform compute design of these batches ameliorates the imbalance between\nmicro-batches, significantly reducing pipeline bubbles.\n Our techniques yield significant improvements in inference performance across\nmodels and hardware.
For the LLaMA-13B model on A6000 GPU, SARATHI improves\ndecode throughput by up to 10x, and accelerates end-to-end throughput by up to\n1.33x. For LLaMA-33B on A100 GPU, we achieve 1.25x higher end-to-end throughput\nand up to 4.25x higher decode throughput. When used with pipeline parallelism\non GPT-3, SARATHI reduces bubbles by 6.29x, resulting in an end-to-end\nthroughput improvement of 1.91x.", "authors": "Amey Agrawal, Ashish Panwar, Jayashree Mohan, Nipun Kwatra, Bhargav S. Gulavani, Ramachandran Ramjee", "published": "2023-08-31", "updated": "2023-08-31", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2401.09670v2", "title": "DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving", "abstract": "DistServe improves the performance of large language model (LLM) serving by\ndisaggregating the prefill and decoding computation. Existing LLM serving\nsystems colocate the two phases and batch the computation of prefill and\ndecoding across all users and requests. We find that this strategy not only\nleads to strong prefill-decoding interferences but also couples the resource\nallocation and parallelism plans for both phases. LLM applications often\nemphasize individual latency for each phase: time to first token (TTFT) for the\nprefill phase and time per output token (TPOT) of each request for the decoding\nphase. In the presence of stringent latency requirements, existing systems have\nto prioritize one latency over the other, or over-provision compute resources\nto meet both.\n DistServe assigns prefill and decoding computation to different GPUs, hence\neliminating prefill-decoding interferences. Given the application's TTFT and\nTPOT requirements, DistServe co-optimizes the resource allocation and\nparallelism strategy tailored for each phase. DistServe also places the two\nphases according to the serving cluster's bandwidth to minimize the\ncommunication caused by disaggregation. As a result, DistServe significantly\nimproves LLM serving performance in terms of the maximum rate that can be\nserved within both TTFT and TPOT constraints on each GPU. Our evaluations show\nthat on various popular LLMs, applications, and latency requirements, DistServe\ncan serve 4.48x more requests or 10.2x tighter SLO, compared to\nstate-of-the-art systems, while staying within latency constraints for > 90% of\nrequests.", "authors": "Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, Hao Zhang", "published": "2024-01-18", "updated": "2024-03-19", "primary_cat": "cs.DC", "cats": [ "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2008.09213v1", "title": "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads", "abstract": "Specialized accelerators such as GPUs, TPUs, FPGAs, and custom ASICs have\nbeen increasingly deployed to train deep learning models. These accelerators\nexhibit heterogeneous performance behavior across model architectures. Existing\nschedulers for clusters of accelerators, which are used to arbitrate these\nexpensive training resources across many users, have shown how to optimize for\nvarious multi-job, multi-user objectives, like fairness and makespan.\nUnfortunately, existing schedulers largely do not consider performance\nheterogeneity.
In this paper, we propose Gavel, a heterogeneity-aware scheduler\nthat systematically generalizes a wide range of existing scheduling policies.\nGavel expresses these policies as optimization problems, making it easy to\noptimize for objectives in a heterogeneity-aware way, while also being\ncognizant of performance optimizations like space sharing. Gavel then uses a\nround-based scheduling mechanism to ensure jobs receive their ideal allocation\ngiven the target scheduling policy. Gavel's heterogeneity-aware policies allow\na heterogeneous cluster to sustain higher input load, and improve end\nobjectives such as average job completion time and makespan by up to 3.5x\ncompared to heterogeneity-agnostic policies.", "authors": "Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia", "published": "2020-08-20", "updated": "2020-08-20", "primary_cat": "cs.DC", "cats": [ "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2311.18677v1", "title": "Splitwise: Efficient generative LLM inference using phase splitting", "abstract": "Recent innovations in generative large language models (LLMs) have made their\napplications and use-cases ubiquitous. This has led to large-scale deployments\nof these models, using complex, expensive, and power-hungry AI accelerators,\nmost commonly GPUs. These developments make LLM inference efficiency an\nimportant challenge. Based on our extensive characterization, we find that\nthere are two main phases during an LLM inference request: a compute-intensive\nprompt computation, and a memory-intensive token generation, each with distinct\nlatency, throughput, memory, and power characteristics. Despite\nstate-of-the-art batching and scheduling, the token generation phase\nunderutilizes compute resources. Specifically, unlike compute-intensive prompt\ncomputation phases, token generation phases do not require the compute\ncapability of the latest GPUs, and can be run with lower power and cost.\n With Splitwise, we propose splitting the two phases of an LLM inference\nrequest onto separate machines. This allows us to use hardware that is\nwell-suited for each phase, and provision resources independently per phase.\nHowever, splitting an inference request across machines requires state transfer\nfrom the machine running prompt computation over to the machine generating\ntokens. We implement and optimize this state transfer using the fast back-plane\ninterconnects available in today's GPU clusters.\n We use the Splitwise technique to design LLM inference clusters using the\nsame or different types of machines for the prompt computation and token\ngeneration phases. Our clusters are optimized for three key objectives:\nthroughput, cost, and power. In particular, we show that we can achieve 1.4x\nhigher throughput at 20% lower cost than current designs.
Alternatively, we can\nachieve 2.35x more throughput with the same cost and power budgets.", "authors": "Pratyush Patel, Esha Choukse, Chaojie Zhang, \u00cd\u00f1igo Goiri, Aashaka Shah, Saeed Maleki, Ricardo Bianchini", "published": "2023-11-30", "updated": "2023-11-30", "primary_cat": "cs.AR", "cats": [ "cs.AR", "cs.DC", "I.2.0, I.3.1, C.4" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2211.13878v1", "title": "Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism", "abstract": "Transformer models have achieved state-of-the-art performance on various\ndomains of applications and have gradually become the foundation of advanced\nlarge deep learning (DL) models. However, how to train these models over\nmultiple GPUs efficiently is still challenging due to a large number of\nparallelism choices. Existing DL systems either rely on manual efforts to make\ndistributed training plans or apply parallelism combinations within a very\nlimited search space. In this paper, we propose Galvatron, a new system\nframework that incorporates multiple popular parallelism dimensions and\nautomatically finds the most efficient hybrid parallelism strategy. To better\nexplore such a vast search space, we 1) employ a decision tree to perform\ndecomposition and pruning based on reasonable intuitions, and then 2)\ndesign a dynamic programming search algorithm to generate the optimal plan.\nEvaluations on four representative Transformer workloads show that Galvatron\ncan automatically perform distributed training with different GPU memory\nbudgets. Among all evaluated scenarios, Galvatron always achieves superior\nsystem throughput compared to previous work with limited parallelism.", "authors": "Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui", "published": "2022-11-25", "updated": "2022-11-25", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.DB", "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2205.14135v2", "title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", "abstract": "Transformers are slow and memory-hungry on long sequences, since the time and\nmemory complexity of self-attention are quadratic in sequence length.\nApproximate attention methods have attempted to address this problem by trading\noff model quality to reduce the compute complexity, but often do not achieve\nwall-clock speedup. We argue that a missing principle is making attention\nalgorithms IO-aware -- accounting for reads and writes between levels of GPU\nmemory. We propose FlashAttention, an IO-aware exact attention algorithm that\nuses tiling to reduce the number of memory reads/writes between GPU high\nbandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of\nFlashAttention, showing that it requires fewer HBM accesses than standard\nattention, and is optimal for a range of SRAM sizes. We also extend\nFlashAttention to block-sparse attention, yielding an approximate attention\nalgorithm that is faster than any existing approximate attention method.\nFlashAttention trains Transformers faster than existing baselines: 15%\nend-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the\nMLPerf 1.1 training speed record, 3$\\times$ speedup on GPT-2 (seq. length 1K),\nand 2.4$\\times$ speedup on long-range arena (seq. length 1K-4K).
FlashAttention\nand block-sparse FlashAttention enable longer context in Transformers, yielding\nhigher quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on\nlong-document classification) and entirely new capabilities: the first\nTransformers to achieve better-than-chance performance on the Path-X challenge\n(seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1%\naccuracy).", "authors": "Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher R\u00e9", "published": "2022-05-27", "updated": "2022-06-23", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2211.17192v2", "title": "Fast Inference from Transformers via Speculative Decoding", "abstract": "Inference from large autoregressive models like Transformers is slow -\ndecoding K tokens takes K serial runs of the model. In this work we introduce\nspeculative decoding - an algorithm to sample from autoregressive models faster\nwithout any changes to the outputs, by computing several tokens in parallel. At\nthe heart of our approach lie the observations that (1) hard language-modeling\ntasks often include easier subtasks that can be approximated well by more\nefficient models, and (2) using speculative execution and a novel sampling\nmethod, we can make exact decoding from the large models faster, by running\nthem in parallel on the outputs of the approximation models, potentially\ngenerating several tokens concurrently, and without changing the distribution.\nOur method can accelerate existing off-the-shelf models without retraining or\narchitecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration\ncompared to the standard T5X implementation, with identical outputs.", "authors": "Yaniv Leviathan, Matan Kalman, Yossi Matias", "published": "2022-11-30", "updated": "2023-05-18", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CL" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2311.15566v1", "title": "SpotServe: Serving Generative Large Language Models on Preemptible Instances", "abstract": "The high computational and memory requirements of generative large language\nmodels (LLMs) make it challenging to serve them cheaply. This paper aims to\nreduce the monetary cost for serving LLMs by leveraging preemptible GPU\ninstances on modern clouds, which offer access to spare GPUs at a much\ncheaper price than regular instances but may be preempted by the cloud at any\ntime. Serving LLMs on preemptible instances requires addressing challenges\ninduced by frequent instance preemptions and the necessity of migrating\ninstances to handle these preemptions.\n This paper presents SpotServe, the first distributed LLM serving system on\npreemptible instances. Several key techniques in SpotServe realize fast and\nreliable serving of generative LLMs on cheap preemptible instances. First,\nSpotServe dynamically adapts the LLM parallelization configuration for dynamic\ninstance availability and fluctuating workload, while balancing the trade-off\namong the overall throughput, inference latency and monetary costs. Second, to\nminimize the cost of migrating instances for dynamic reparallelization, the\ntask of migrating instances is formulated as a bipartite graph matching\nproblem, which uses the Kuhn-Munkres algorithm to identify an optimal migration\nplan that minimizes communications.
Finally, to take advantage of the grace\nperiod offered by modern clouds, we introduce stateful inference recovery, a\nnew inference mechanism that commits inference progress at a much finer\ngranularity and allows SpotServe to cheaply resume inference upon preemption.\nWe evaluate on real spot instance preemption traces and various popular LLMs\nand show that SpotServe can reduce the P99 tail latency by 2.4 - 9.1x compared\nwith the best existing LLM serving systems. We also show that SpotServe can\nleverage the price advantage of preemptive instances, saving 54% monetary cost\ncompared with only using on-demand instances.", "authors": "Xupeng Miao, Chunan Shi, Jiangfei Duan, Xiaoli Xi, Dahua Lin, Bin Cui, Zhihao Jia", "published": "2023-11-27", "updated": "2023-11-27", "primary_cat": "cs.DC", "cats": [ "cs.DC", "cs.CL", "cs.LG" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2305.05920v1", "title": "Fast Distributed Inference Serving for Large Language Models", "abstract": "Large language models (LLMs) power a new generation of interactive AI\napplications exemplified by ChatGPT. The interactive nature of these\napplications demands low job completion time (JCT) for model inference. Existing\nLLM serving systems use run-to-completion processing for inference jobs, which\nsuffers from head-of-line blocking and long JCT. We present FastServe, a\ndistributed inference serving system for LLMs. FastServe exploits the\nautoregressive pattern of LLM inference to enable preemption at the granularity\nof each output token. FastServe uses preemptive scheduling to minimize JCT with\na novel skip-join Multi-Level Feedback Queue scheduler. Based on the new semi\ninformation-agnostic setting of LLM inference, the scheduler leverages the\ninput length information to assign an appropriate initial queue for each\narrival job to join. Queues with higher priority than the joined queue are\nskipped to reduce demotions. We design an efficient GPU memory management\nmechanism that proactively offloads and uploads intermediate states between GPU\nmemory and host memory for LLM inference. We build a system prototype of\nFastServe based on NVIDIA FasterTransformer. Experimental results show that\ncompared to the state-of-the-art solution Orca, FastServe improves the average\nand tail JCT by up to 5.1$\\times$ and 6.4$\\times$, respectively.", "authors": "Bingyang Wu, Yinmin Zhong, Zili Zhang, Gang Huang, Xuanzhe Liu, Xin Jin", "published": "2023-05-10", "updated": "2023-05-10", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2312.07104v1", "title": "Efficiently Programming Large Language Models using SGLang", "abstract": "Large language models (LLMs) are increasingly used for complex tasks\nrequiring multiple chained generation calls, advanced prompting techniques,\ncontrol flow, and interaction with external environments. However, efficient\nsystems for programming and executing these applications are lacking. To bridge\nthis gap, we introduce SGLang, a Structured Generation Language for LLMs.\nSGLang is designed for the efficient programming of LLMs and incorporates\nprimitives for common LLM programming patterns. We have implemented SGLang as a\ndomain-specific language embedded in Python, and we developed an interpreter, a\ncompiler, and a high-performance runtime for SGLang.
These components work\ntogether to enable optimizations such as parallelism, batching, caching,\nsharing, and other compilation techniques. Additionally, we propose\nRadixAttention, a novel technique that maintains a Least Recently Used (LRU)\ncache of the Key-Value (KV) cache for all requests in a radix tree, enabling\nautomatic KV cache reuse across multiple generation calls at runtime. SGLang\nsimplifies the writing of LLM programs and boosts execution efficiency. Our\nexperiments demonstrate that SGLang can speed up common LLM tasks by up to 5x,\nwhile reducing code complexity and enhancing control.", "authors": "Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, Ying Sheng", "published": "2023-12-12", "updated": "2023-12-12", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.PL" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2210.17323v2", "title": "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", "abstract": "Generative Pre-trained Transformer models, known as GPT or OPT, set\nthemselves apart through breakthrough performance across complex language\nmodelling tasks, but also by their extremely high computational and storage\ncosts. Specifically, due to their massive size, even inference for large,\nhighly-accurate GPT models may require multiple performant GPUs, which limits\nthe usability of such models. While there is emerging work on relieving this\npressure via model compression, the applicability and performance of existing\ncompression techniques is limited by the scale and complexity of GPT models. In\nthis paper, we address this challenge, and propose GPTQ, a new one-shot weight\nquantization method based on approximate second-order information, that is both\nhighly-accurate and highly-efficient. Specifically, GPTQ can quantize GPT\nmodels with 175 billion parameters in approximately four GPU hours, reducing\nthe bitwidth down to 3 or 4 bits per weight, with negligible accuracy\ndegradation relative to the uncompressed baseline. Our method more than doubles\nthe compression gains relative to previously-proposed one-shot quantization\nmethods, preserving accuracy, allowing us for the first time to execute a 175\nbillion-parameter model inside a single GPU for generative inference. Moreover,\nwe also show that our method can still provide reasonable accuracy in the\nextreme quantization regime, in which weights are quantized to 2-bit or even\nternary quantization levels. We show experimentally that these improvements can\nbe leveraged for end-to-end inference speedups over FP16, of around 3.25x when\nusing high-end GPUs (NVIDIA A100) and 4.5x when using more cost-effective ones\n(NVIDIA A6000). The implementation is available at\nhttps://github.com/IST-DASLab/gptq.", "authors": "Elias Frantar, Saleh Ashkboos, Torsten Hoefler, Dan Alistarh", "published": "2022-10-31", "updated": "2023-03-22", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2311.11514v2", "title": "HexGen: Generative Inference of Large-Scale Foundation Model over Heterogeneous Decentralized Environment", "abstract": "Serving generative inference of large-scale foundation models is a crucial\ncomponent of contemporary AI applications.
This paper focuses on deploying such\nservices in a heterogeneous and decentralized setting to mitigate the\nsubstantial inference costs typically associated with centralized data centers.\nTowards this end, we propose HexGen, a flexible distributed inference engine\nthat uniquely supports the asymmetric partition of generative inference\ncomputations over both tensor model parallelism and pipeline parallelism and\nallows for effective deployment across diverse GPUs interconnected by a fully\nheterogeneous network. We further propose a sophisticated scheduling algorithm\ngrounded in constrained optimization that can adaptively assign asymmetric\ninference computation across the GPUs to fulfill inference requests while\nmaintaining acceptable latency levels. We conduct an extensive evaluation to\nverify the efficiency of HexGen by serving the state-of-the-art Llama-2 (70B)\nmodel. The results suggest that HexGen can choose to achieve up to 2.3 times\nlower latency deadlines or tolerate up to 4 times higher request rates compared\nwith the homogeneous baseline given the same budget.", "authors": "Youhe Jiang, Ran Yan, Xiaozhe Yao, Yang Zhou, Beidi Chen, Binhang Yuan", "published": "2023-11-20", "updated": "2024-02-04", "primary_cat": "cs.DC", "cats": [ "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2309.06180v1", "title": "Efficient Memory Management for Large Language Model Serving with PagedAttention", "abstract": "High throughput serving of large language models (LLMs) requires batching\nsufficiently many requests at a time. However, existing systems struggle\nbecause the key-value cache (KV cache) memory for each request is huge and\ngrows and shrinks dynamically. When managed inefficiently, this memory can be\nsignificantly wasted by fragmentation and redundant duplication, limiting the\nbatch size. To address this problem, we propose PagedAttention, an attention\nalgorithm inspired by the classical virtual memory and paging techniques in\noperating systems. On top of it, we build vLLM, an LLM serving system that\nachieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV\ncache within and across requests to further reduce memory usage. Our\nevaluations show that vLLM improves the throughput of popular LLMs by\n2-4$\\times$ with the same level of latency compared to the state-of-the-art\nsystems, such as FasterTransformer and Orca. The improvement is more pronounced\nwith longer sequences, larger models, and more complex decoding algorithms.\nvLLM's source code is publicly available at\nhttps://github.com/vllm-project/vllm", "authors": "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica", "published": "2023-09-12", "updated": "2023-09-12", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.DC" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2404.10199v3", "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting", "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures.
In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated with each culture by the LLM.\nWe discover that culture-conditioned generations consist of linguistic \"markers\"\nthat distinguish marginalized cultures from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have a different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/", "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi", "published": "2024-04-16", "updated": "2024-04-26", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.04489v1", "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", "published": "2024-02-07", "updated": "2024-02-07", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CR", "cs.CY", "stat.ME" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2304.03728v1", "title": "Interpretable Unified Language Checking", "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherently multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts.
With the \"1/2-shot\" multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", "published": "2023-04-07", "updated": "2023-04-07", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2404.03192v1", "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", "published": "2024-04-04", "updated": "2024-04-04", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2310.18333v3", "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs.
Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", "published": "2023-10-20", "updated": "2023-12-15", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2402.17916v2", "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into the models' limitations.", "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", "published": "2024-02-27", "updated": "2024-03-30", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2403.14473v1", "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identify\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincing but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident.
Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", "authors": "Joschka Haltaufderheide, Robert Ranisch", "published": "2024-03-21", "updated": "2024-03-21", "primary_cat": "cs.CY", "cats": [ "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2206.13757v1", "title": "Flexible text generation for counterfactual fairness probing", "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", "published": "2022-06-28", "updated": "2022-06-28", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2308.05345v3", "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers have started to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and small in\nscale, and are proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works focus\nonly on design correctness, without evaluating the design quality of the\ngenerated RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarize three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution.
Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.07884v2", + "title": "Fair Abstractive Summarization of Diverse Perspectives", + "abstract": "People from different social and demographic groups express diverse\nperspectives and conflicting opinions on a broad set of topics such as product\nreviews, healthcare, law, and politics. A fair summary should provide a\ncomprehensive coverage of diverse perspectives without underrepresenting\ncertain groups. However, current work in summarization metrics and Large\nLanguage Models (LLMs) evaluation has not explored fair abstractive\nsummarization. In this paper, we systematically investigate fair abstractive\nsummarization for user-generated data. We first formally define fairness in\nabstractive summarization as not underrepresenting perspectives of any groups\nof people, and we propose four reference-free automatic metrics by measuring\nthe differences between target and source perspectives. 
We evaluate nine LLMs,\nincluding three GPT models, four LLaMA models, PaLM 2, and Claude, on six\ndatasets collected from social media, online reviews, and recorded transcripts.\nExperiments show that both the model-generated and the human-written reference\nsummaries suffer from low fairness. We conduct a comprehensive analysis of the\ncommon factors influencing fairness and propose three simple but effective\nmethods to alleviate unfair summarization. Our dataset and code are available\nat https://github.com/psunlpgroup/FairSumm.", + "authors": "Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang", + "published": "2023-11-14", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.03838v2", + "title": "RADAR: Robust AI-Text Detection via Adversarial Learning", + "abstract": "Recent advances in large language models (LLMs) and the intensifying\npopularity of ChatGPT-like applications have blurred the boundary of\nhigh-quality text generation between humans and machines. However, in addition\nto the anticipated revolutionary changes to our technology and society, the\ndifficulty of distinguishing LLM-generated texts (AI-text) from human-generated\ntexts poses new challenges of misuse and fairness, such as fake content\ngeneration, plagiarism, and false accusations of innocent writers. 
While\nexisting works show that current AI-text detectors are not robust to LLM-based\nparaphrasing, this paper aims to bridge this gap by proposing a new framework\ncalled RADAR, which jointly trains a robust AI-text detector via adversarial\nlearning. RADAR is based on adversarial training of a paraphraser and a\ndetector. The paraphraser's goal is to generate realistic content to evade\nAI-text detection. RADAR uses the feedback from the detector to update the\nparaphraser, and vice versa. Evaluated with 8 different LLMs (Pythia, Dolly\n2.0, Palmyra, Camel, GPT-J, Dolly 1.0, LLaMA, and Vicuna) across 4 datasets,\nexperimental results show that RADAR significantly outperforms existing AI-text\ndetection methods, especially when paraphrasing is in place. We also identify\nthe strong transferability of RADAR from instruction-tuned LLMs to other LLMs,\nand evaluate the improved capability of RADAR via GPT-3.5-Turbo.", + "authors": "Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho", + "published": "2023-07-07", + "updated": "2023-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. 
This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during the LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity for, and deficiencies in, incorporating constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", "authors": "Xiang Chen, Xiaojun Wan", "published": "2023-10-25", "updated": "2024-03-21", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2305.07609v3", "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", "published": "2023-05-12", "updated": "2023-10-17", "primary_cat": "cs.IR", "cats": [ "cs.IR", "cs.CL", "cs.CY" ], "category": "LLM Fairness" }, { "url": "http://arxiv.org/abs/2311.02049v1", "title": "Post Turing: Mapping the landscape of LLM Evaluation", "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs), the\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria.
This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment, a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data (40 times less),\nutilizing merely 20 data points compared to the ML models' 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are, however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview of bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00811v1", + "title": "Cognitive Bias in High-Stakes Decision-Making with LLMs", + "abstract": "Large language models (LLMs) offer significant potential as tools to support\nan expanding range of decision-making tasks. However, given their training on\nhuman (created) data, LLMs can both inherit societal biases against protected\ngroups and be subject to cognitive bias. Such human-like bias can\nimpede fair and explainable decisions made with LLM assistance.
Our work\nintroduces BiasBuster, a framework designed to uncover, evaluate, and mitigate\ncognitive bias in LLMs, particularly in high-stakes decision-making tasks.\nInspired by prior research in psychology and cognitive sciences, we develop a\ndataset containing 16,800 prompts to evaluate different cognitive biases (e.g.,\nprompt-induced, sequential, inherent). We test various bias mitigation\nstrategies and propose a novel method that uses LLMs to debias their own\nprompts. Our analysis provides a comprehensive picture of the presence and\neffects of cognitive bias across different commercial and open-source models.\nWe demonstrate that our self-help debiasing effectively mitigates cognitive bias\nwithout having to manually craft examples for each bias type.", + "authors": "Jessica Echterhoff, Yao Liu, Abeer Alessa, Julian McAuley, Zexue He", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs), especially ChatGPT, have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a study case. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare.
To conduct a thorough evaluation, we consider both group fairness and\nindividual fairness, and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to collect\ncomprehensively and are limited to explicit bias evaluation. In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness, and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S.
Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.11653v2", + "title": "\"It's a Fair Game\", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents", + "abstract": "The widespread use of Large Language Model (LLM)-based conversational agents\n(CAs), especially in high-stakes domains, raises many privacy concerns.\nBuilding ethical LLM-based CAs that respect user privacy requires an in-depth\nunderstanding of the privacy risks that concern users the most. However,\nexisting research, primarily model-centered, does not provide insight into\nusers' perspectives. To bridge this gap, we analyzed sensitive disclosures in\nreal-world ChatGPT conversations and conducted semi-structured interviews with\n19 LLM-based CA users. We found that users are constantly faced with trade-offs\nbetween privacy, utility, and convenience when using LLM-based CAs. However,\nusers' erroneous mental models and the dark patterns in system design limited\ntheir awareness and comprehension of the privacy risks. Additionally, the\nhuman-like interactions encouraged more sensitive disclosures, which\ncomplicated users' ability to navigate the trade-offs. 
We discuss practical\ndesign guidelines and the need for paradigm shifts to protect the privacy of\nLLM-based CA users.", + "authors": "Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li", + "published": "2023-09-20", + "updated": "2024-04-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals that LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.06003v1", + "title": "FreeEval: A Modular Framework for Trustworthy and Efficient Evaluation of Large Language Models", + "abstract": "The rapid development of large language model (LLM) evaluation methodologies\nand datasets has led to a profound challenge: integrating state-of-the-art\nevaluation techniques cost-effectively while ensuring reliability,\nreproducibility, and efficiency. Currently, there is a notable absence of a\nunified and adaptable framework that seamlessly integrates various evaluation\napproaches. Moreover, the reliability of evaluation findings is often\nquestionable due to potential data contamination, with the evaluation\nefficiency commonly overlooked when facing the substantial costs associated\nwith LLM inference. In response to these challenges, we introduce FreeEval, a\nmodular and scalable framework crafted to enable trustworthy and efficient\nautomatic evaluations of LLMs.
Firstly, FreeEval's unified abstractions\nsimplify the integration and improve the transparency of diverse evaluation\nmethodologies, encompassing dynamic evaluations that demand sophisticated LLM\ninteractions. Secondly, the framework integrates meta-evaluation techniques\nlike human evaluation and data contamination detection, which, along with\ndynamic evaluation modules in the platform, enhance the fairness of the\nevaluation outcomes. Lastly, FreeEval is designed with a high-performance\ninfrastructure, including distributed computation and caching strategies,\nenabling extensive evaluations across multi-node, multi-GPU clusters for\nopen-source and proprietary LLMs.", + "authors": "Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Zhengran Zeng, Wei Ye, Jindong Wang, Yue Zhang, Shikun Zhang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.11595v3", + "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate", + "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenario alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, we show that LLMs can\neffectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance. Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines.
To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics of LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society.
Given this, as well as the widespread use of tabular data\nin many high-stakes applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data, which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. In addition, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from their initial development and training to\ntheir final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use.
This paper provides a future research agenda for the LLM\nsupply chain, aiming to drive the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown that these models are\nsensitive to prompt wording and to few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs' robustness on the task of\nmultiple-choice questions -- a commonly adopted task for studying the reasoning and\nfact-retrieving capabilities of LLMs. Investigating the sensitivity of LLMs\nto the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and that specific option placements may favor certain predictions\namong those top choices, depending on the question, owing to positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nimprovements of up to 8 percentage points across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to free-text queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage.
Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from the perspectives of both computer\nscience and the Healthcare specialty. Beyond the discussion of Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks, on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.02650v1", + "title": "Towards detecting unanticipated bias in Large Language Models", + "abstract": "Over the last year, Large Language Models (LLMs) like ChatGPT have become\nwidely available and have exhibited fairness issues similar to those in\nprevious machine learning systems. Current research is primarily focused on\nanalyzing and quantifying these biases in training data and their impact on the\ndecisions of these models, alongside developing mitigation strategies. This\nresearch largely targets well-known biases related to gender, race, ethnicity,\nand language. However, it is clear that LLMs are also affected by other, less\nobvious implicit biases. The complex and often opaque nature of these models\nmakes detecting such biases challenging, yet this is crucial due to their\npotential negative impact in various applications.
In this paper, we explore\nnew avenues for detecting these unanticipated biases in LLMs, focusing\nspecifically on Uncertainty Quantification and Explainable AI methods. These\napproaches aim to assess the certainty of model decisions and to make the\ninternal decision-making processes of LLMs more transparent, thereby\nidentifying and understanding biases that are not immediately apparent. Through\nthis research, we aim to contribute to the development of fairer and more\ntransparent AI systems.", + "authors": "Anna Kruspe", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLM).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. 
The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is feminine or\nmasculine before making the final predictions. Because it involves both counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs.
To address these issues, we suggest\ndiversifying training data, fine-tuning models, enhancing transparency and\ninterpretability, and incorporating ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of objective and fair benchmarks that encompass most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios.
ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into work oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and work oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.19118v1", + "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate", + "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs,\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperimental results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework.
Extensive analyses suggest that an adaptive break of the debate\nand a modest level of the \"tit for tat\" state are required for MAD to obtain good\nperformance. Moreover, we find that LLMs might not be fair judges if different\nLLMs are used for the agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge.
By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention, for example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs, ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor.
Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.18276v1", + "title": "Bias Neutralization Framework: Measuring Fairness in Large Language Models with Bias Intelligence Quotient (BiQ)", + "abstract": "The burgeoning influence of Large Language Models (LLMs) in shaping public\ndiscourse and decision-making underscores the imperative to address inherent\nbiases within these AI systems. In the wake of AI's expansive integration\nacross sectors, addressing racial bias in LLMs has never been more critical.\nThis paper introduces a novel framework called Comprehensive Bias\nNeutralization Framework (CBNF) which embodies an innovative approach to\nquantifying and mitigating biases within LLMs. Our framework combines the Large\nLanguage Model Bias Index (LLMBI) [Oketunji, A., Anas, M., Saina, D., (2023)]\nand Bias removaL with No Demographics (BLIND) [Orgad, H., Belinkov, Y. (2023)]\nmethodologies to create a new metric called Bias Intelligence Quotient\n(BiQ), which detects, measures, and mitigates racial bias in LLMs without\nreliance on demographic annotations.\n By introducing a new metric called BiQ that enhances LLMBI with additional\nfairness metrics, CBNF offers a multi-dimensional metric for bias assessment,\nunderscoring the necessity of a nuanced approach to fairness in AI [Mehrabi et\nal., 2021]. This paper presents a detailed analysis of Latimer AI (a language\nmodel incrementally trained on black history and culture) in comparison to\nChatGPT 3.5, illustrating Latimer AI's efficacy in detecting racial, cultural,\nand gender biases through targeted training and refined bias mitigation\nstrategies [Latimer & Bender, 2023].", + "authors": "Malur Narayan, John Pasmore, Elton Sampaio, Vijay Raghavan, Gabriella Waters", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "D.1; I.2" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.12090v1", + "title": "UP5: Unbiased Foundation Model for Fairness-aware Recommendation", + "abstract": "Recent advancements in foundation models such as large language models (LLM)\nhave propelled them to the forefront of recommender systems (RS). Moreover,\nfairness in RS is critical since many users apply it for decision-making and\ndemand fulfillment. However, at present, there is a lack of understanding\nregarding the level of fairness exhibited by recommendation foundation models\nand the appropriate methods for equitably treating different groups of users in\nfoundation models. In this paper, we focus on the user-side unfairness problem and\nshow through a thorough examination that there is unfairness involved in LLMs\nthat leads to unfair recommendation results. To eliminate bias from LLM for\nfairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)\nfoundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP\nincludes two sub-modules: a personalized prefix prompt that enhances fairness\nwith respect to individual sensitive attributes, and a Prompt Mixture that\nintegrates multiple counterfactually-fair prompts for a set of sensitive\nattributes.
Experiments are conducted on two real-world datasets, MovieLens-1M\nand Insurance, and results are compared with both matching-based and\nsequential-based fairness-aware recommendation models. The results show that\nUP5 achieves better recommendation performance and meanwhile exhibits a high\nlevel of fairness.", + "authors": "Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang", + "published": "2023-05-20", + "updated": "2023-05-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. 
However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15451v1", + "title": "Towards Enabling FAIR Dataspaces Using Large Language Models", + "abstract": "Dataspaces have recently gained adoption across various sectors, including\ntraditionally less digitized domains such as culture. Leveraging Semantic Web\ntechnologies helps to make dataspaces FAIR, but their complexity poses a\nsignificant challenge to the adoption of dataspaces and increases their cost.\nThe advent of Large Language Models (LLMs) raises the question of how these\nmodels can support the adoption of FAIR dataspaces. In this work, we\ndemonstrate the potential of LLMs in dataspaces with a concrete example. We\nalso derive a research agenda for exploring this emerging field.", + "authors": "Benedikt T. Arnold, Johannes Theissen-Lipp, Diego Collarana, Christoph Lange, Sandra Geisler, Edward Curry, Stefan Decker", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. 
Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.02839v1", + "title": "An Empirical Study of LLM-as-a-Judge for LLM Evaluation: Fine-tuned Judge Models are Task-specific Classifiers", + "abstract": "Recently, there has been a growing trend of utilizing Large Language Models\n(LLMs) to evaluate the quality of other LLMs. Many studies have employed\nproprietary closed-source models, especially GPT4, as the evaluator.\nAlternatively, other works have fine-tuned judge models based on open-source\nLLMs as the evaluator. In this study, we conduct an empirical study of\ndifferent judge models on their evaluation capability. Our findings indicate\nthat although the fine-tuned judge models achieve high accuracy on in-domain\ntest sets, even surpassing GPT4, they are inherently task-specific classifiers,\nand their generalizability and fairness severely underperform GPT4.", + "authors": "Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Tiejun Zhao", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. 
This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess an LLM's\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. 
Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. 
As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.", + "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova", + "published": "2024-03-26", + "updated": "2024-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity and equal representation based on factors such as race\nand gender, and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. 
This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. 
By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread use of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborately designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunsatisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research on harmless LLMs.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). 
Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11764v1", + "title": "ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs", + "abstract": "Large Language Models (LLMs), while powerful, exhibit harmful social biases.\nDebiasing is often challenging due to computational costs, data constraints,\nand potential degradation of multi-task language capabilities. This work\nintroduces a novel approach utilizing ChatGPT to generate synthetic training\ndata, aiming to enhance the debiasing of LLMs. We propose two strategies:\nTargeted Prompting, which provides effective debiasing for known biases but\nnecessitates prior specification of bias in question; and General Prompting,\nwhich, while slightly less effective, offers debiasing across various\ncategories. We leverage resource-efficient LLM debiasing using adapter tuning\nand compare the effectiveness of our synthetic data to existing debiasing\ndatasets. Our results reveal that: (1) ChatGPT can efficiently produce\nhigh-quality training data for debiasing other LLMs; (2) data produced via our\napproach surpasses existing datasets in debiasing performance while also\npreserving internal knowledge of a pre-trained LLM; and (3) synthetic data\nexhibits generalizability across categories, effectively mitigating various\nbiases, including intersectional ones. These findings underscore the potential\nof synthetic data in advancing the fairness of LLMs with minimal retraining\ncost.", + "authors": "Pengrui Han, Rafal Kocielnik, Adhithya Saravanan, Roy Jiang, Or Sharir, Anima Anandkumar", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "68T50", + "I.2.7; K.4.1" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. 
This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04205v2", + "title": "Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves", + "abstract": "Misunderstandings arise not only in interpersonal communication but also\nbetween humans and Large Language Models (LLMs). Such discrepancies can make\nLLMs interpret seemingly unambiguous questions in unexpected ways, yielding\nincorrect responses. While it is widely acknowledged that the quality of a\nprompt, such as a question, significantly impacts the quality of the response\nprovided by LLMs, a systematic method for crafting questions that LLMs can\nbetter comprehend is still underdeveloped. 
In this paper, we present a method\nnamed `Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand\nquestions posed by humans and provide responses in a single prompt. This\napproach serves as a simple yet effective prompting method for improving\nperformance. We also introduce a two-step variant of RaR, where a rephrasing\nLLM first rephrases the question and then passes the original and rephrased\nquestions together to a different responding LLM. This facilitates the\neffective utilization of rephrased questions generated by one LLM with another.\nOur experiments demonstrate that our methods significantly improve the\nperformance of different models across a wide range of tasks. We further\nprovide a comprehensive comparison between RaR and the popular Chain-of-Thought\n(CoT) methods, both theoretically and empirically. We show that RaR is\ncomplementary to CoT and can be combined with CoT to achieve even better\nperformance. Our work not only contributes to enhancing LLM performance\nefficiently and effectively but also sheds light on a fair evaluation of LLM\ncapabilities. Data and codes are available at\nhttps://github.com/uclaml/Rephrase-and-Respond.", + "authors": "Yihe Deng, Weitong Zhang, Zixiang Chen, Quanquan Gu", + "published": "2023-11-07", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. 
These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for an ensemble of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. 
We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. 
Extensive evaluations of\npopular LLMs reveal inherent safety concerns. To mitigate these biases\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method,\nGF-Think. Experimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to model weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + ] + ] + }, + { + "url": "http://arxiv.org/abs/2211.12588v4", + "title": "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks", + "abstract": "Recently, there has been significant progress in teaching language models to\nperform step-by-step reasoning to solve complex numerical reasoning tasks.\nChain-of-thoughts prompting (CoT) is by far the state-of-the-art method for these\ntasks. CoT uses language models to perform both reasoning and computation in\nthe multi-step `thought' process. To disentangle computation from reasoning, we\npropose `Program of Thoughts' (PoT), which uses language models (mainly Codex)\nto express the reasoning process as a program. The computation is relegated to\nan external computer, which executes the generated programs to derive the\nanswer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP,\nTabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA)\nfor both few-shot and zero-shot setups. Under both few-shot and zero-shot\nsettings, PoT can show an average performance gain over CoT by around 12\\%\nacross all the evaluated datasets. By combining PoT with self-consistency\ndecoding, we can achieve SoTA performance on all math problem datasets and\nnear-SoTA performance on financial datasets. All of our data and code are\nreleased in Github https://github.com/wenhuchen/Program-of-Thoughts", + "authors": "Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. 
Cohen", + "published": "2022-11-22", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.09687v4", + "title": "Graph of Thoughts: Solving Elaborate Problems with Large Language Models", + "abstract": "We introduce Graph of Thoughts (GoT): a framework that advances prompting\ncapabilities in large language models (LLMs) beyond those offered by paradigms\nsuch as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary\nadvantage of GoT is the ability to model the information generated by an LLM as\nan arbitrary graph, where units of information (\"LLM thoughts\") are vertices,\nand edges correspond to dependencies between these vertices. This approach\nenables combining arbitrary LLM thoughts into synergistic outcomes, distilling\nthe essence of whole networks of thoughts, or enhancing thoughts using feedback\nloops. We illustrate that GoT offers advantages over state of the art on\ndifferent tasks, for example increasing the quality of sorting by 62% over ToT,\nwhile simultaneously reducing costs by >31%. We ensure that GoT is extensible\nwith new thought transformations and thus can be used to spearhead new\nprompting schemes. This work brings the LLM reasoning closer to human thinking\nor brain mechanisms such as recurrence, both of which form complex networks.", + "authors": "Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler", + "published": "2023-08-18", + "updated": "2024-02-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.15164v2", + "title": "LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers", + "abstract": "Logical reasoning, i.e., deductively inferring the truth value of a\nconclusion from a set of premises, is an important task for artificial\nintelligence with wide potential impacts on science, mathematics, and society.\nWhile many prompting-based strategies have been proposed to enable Large\nLanguage Models (LLMs) to do such reasoning more effectively, they still appear\nunsatisfactory, often failing in subtle and unpredictable ways. In this work,\nwe investigate the validity of instead reformulating such tasks as modular\nneurosymbolic programming, which we call LINC: Logical Inference via\nNeurosymbolic Computation. In LINC, the LLM acts as a semantic parser,\ntranslating premises and conclusions from natural language to expressions in\nfirst-order logic. These expressions are then offloaded to an external theorem\nprover, which symbolically performs deductive inference. Leveraging this\napproach, we observe significant performance gains on FOLIO and a balanced\nsubset of ProofWriter for three different models in nearly all experimental\nconditions we evaluate. On ProofWriter, augmenting the comparatively small\nopen-source StarCoder+ (15.5B parameters) with LINC even outperforms GPT-3.5\nand GPT-4 with Chain-of-Thought (CoT) prompting by an absolute 38% and 10%,\nrespectively. When used with GPT-4, LINC scores 26% higher than CoT on\nProofWriter while performing comparatively on FOLIO. Further analysis reveals\nthat although both methods on average succeed roughly equally often on this\ndataset, they exhibit distinct and complementary failure modes. 
We thus provide\npromising evidence for how logical reasoning over natural language can be\ntackled through jointly leveraging LLMs alongside symbolic provers. All\ncorresponding code is publicly available at https://github.com/benlipkin/linc",         +        "authors": "Theo X. Olausson, Alex Gu, Benjamin Lipkin, Cedegao E. Zhang, Armando Solar-Lezama, Joshua B. Tenenbaum, Roger Levy",         +        "published": "2023-10-23",         +        "updated": "2024-02-14",         +        "primary_cat": "cs.CL",         +        "cats": [         +            "cs.CL",         +            "cs.AI"         +        ],         +        "label": "Related Work"         +    },         +    {         +        "url": "http://arxiv.org/abs/2305.13718v6",         +        "title": "Exploring Self-supervised Logic-enhanced Training for Large Language Models",         +        "abstract": "Existing efforts to improve logical reasoning ability of language models have\npredominantly relied on supervised fine-tuning, hindering generalization to new\ndomains and/or tasks. The development of Large Language Models (LLMs) has\ndemonstrated the capacity of compressing abundant knowledge into a single\nproxy, enabling them to tackle multiple tasks effectively. Our preliminary\nexperiments, nevertheless, show that LLMs do not show capability on logical\nreasoning. The performance of LLMs on logical reasoning benchmarks is far\nbehind the existing state-of-the-art baselines. In this paper, we make the\nfirst attempt to investigate the feasibility of incorporating logical knowledge\nthrough self-supervised post-training, and activating it via in-context\nlearning, which we termed as LogicLLM. Specifically, we devise an\nauto-regressive objective variant of MERIt and integrate it with two LLM\nseries, i.e., FLAN-T5 and LLaMA, with parameter size ranging from 3 billion to\n13 billion. The results on two challenging logical reasoning benchmarks\ndemonstrate the effectiveness of LogicLLM. Besides, we conduct extensive\nablation studies to analyze the key factors in designing logic-oriented proxy\ntasks.",         +        "authors": "Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty",         +        "published": "2023-05-23",         +        "updated": "2024-04-23",         +        "primary_cat": "cs.CL",         +        "cats": [         +            "cs.CL"         +        ],         +        "label": "Related Work"         +    },         +    {         +        "url": "http://arxiv.org/abs/2205.09712v1",         +        "title": "Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning",         +        "abstract": "Large language models (LLMs) have been shown to be capable of impressive\nfew-shot generalisation to new tasks. However, they still tend to perform\npoorly on multi-step logical reasoning problems. Here we carry out a\ncomprehensive evaluation of LLMs on 50 tasks that probe different aspects of\nlogical reasoning. We show that language models tend to perform fairly well at\nsingle step inference or entailment tasks, but struggle to chain together\nmultiple reasoning steps to solve more complex problems. In light of this, we\npropose a Selection-Inference (SI) framework that exploits pre-trained LLMs as\ngeneral processing modules, and alternates between selection and inference to\ngenerate a series of interpretable, causal reasoning steps leading to the final\nanswer. We show that a 7B parameter LLM used within the SI framework in a\n5-shot generalisation setting, with no fine-tuning, yields a performance\nimprovement of over 100% compared to an equivalent vanilla baseline on a suite\nof 10 logical reasoning tasks. The same model in the same setting even\noutperforms a significantly larger 280B parameter baseline on the same suite of\ntasks.
Moreover, answers produced by the SI framework are accompanied by a\ncausal natural-language-based reasoning trace, which has important implications\nfor the safety and trustworthiness of the system.", + "authors": "Antonia Creswell, Murray Shanahan, Irina Higgins", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.00114v1", + "title": "Show Your Work: Scratchpads for Intermediate Computation with Language Models", + "abstract": "Large pre-trained language models perform remarkably well on tasks that can\nbe done \"in one pass\", such as generating realistic text or synthesizing\ncomputer programs. However, they struggle with tasks that require unbounded\nmulti-step computation, such as adding integers or executing programs.\nSurprisingly, we find that these same models are able to perform complex\nmulti-step computations -- even in the few-shot regime -- when asked to perform\nthe operation \"step by step\", showing the results of intermediate computations.\nIn particular, we train transformers to perform multi-step computations by\nasking them to emit intermediate computation steps into a \"scratchpad\". On a\nseries of increasingly complex tasks ranging from long addition to the\nexecution of arbitrary programs, we show that scratchpads dramatically improve\nthe ability of language models to perform multi-step computations.", + "authors": "Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena", + "published": "2021-11-30", + "updated": "2021-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.12884v2", + "title": "NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints", + "abstract": "Conditional text generation often requires lexical constraints, i.e., which\nwords should or shouldn't be included in the output text. While the dominant\nrecipe for conditional text generation has been large-scale pretrained language\nmodels that are finetuned on the task-specific training data, such models do\nnot learn to follow the underlying constraints reliably, even when supervised\nwith large amounts of task-specific examples.\n We propose NeuroLogic Decoding, a simple yet effective algorithm that enables\nneural language models -- supervised or not -- to generate fluent text while\nsatisfying complex lexical constraints. Our approach is powerful yet efficient.\nIt handles any set of lexical constraints that is expressible under predicate\nlogic, while its asymptotic runtime is equivalent to conventional beam search.\n Empirical results on four benchmarks show that NeuroLogic Decoding\noutperforms previous approaches, including algorithms that handle a subset of\nour constraints. Moreover, we find that unsupervised models with NeuroLogic\nDecoding often outperform supervised models with conventional decoding, even\nwhen the latter is based on considerably larger networks. 
Our results suggest\nthe limit of large-scale neural networks for fine-grained controllable\ngeneration and the promise of inference-time algorithms.", + "authors": "Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi", + "published": "2020-10-24", + "updated": "2021-04-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.04371v6", + "title": "Cumulative Reasoning with Large Language Models", + "abstract": "Despite the recent advancements in language models (LMs), their ability to\nsolve complex problems remains limited. This paper introduces Cumulative\nReasoning (CR), a novel approach that utilizes LMs cumulatively and\niteratively, mirroring human thought processes for problem-solving. CR\ndecomposes tasks into smaller, manageable components and leverages previous\npropositions for effective composition, significantly enhancing problem-solving\ncapabilities. We demonstrate CR's superiority through several complex reasoning\ntasks: it outperforms existing methods in logical inference tasks with up to a\n9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset.\nIn the Game of 24, it achieves 98% accuracy, marking a 24% improvement over the\nprior state-of-the-art. Additionally, CR sets new state-of-the-art on the MATH\ndataset, achieving a 4.2% increase from previous methods and a 43% relative\nimprovement in the most challenging problems. By extending CR to incorporate a\ncode environment without external aids like retrieval or web browsing, we\nfurther harness the computational and logical reasoning capabilities of LMs,\nachieving a remarkable 72.2% accuracy on the MATH dataset and outperforming the\nPAL/PoT method by 38.8%. Our work not only sets new state-of-the-art but also\npaves the way toward more sophisticated AI reasoning methods. 
The code is\navailable at https://github.com/iiis-ai/cumulative-reasoning.", + "authors": "Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao", + "published": "2023-08-08", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.11916v4", + "title": "Large Language Models are Zero-Shot Reasoners", + "abstract": "Pretrained large language models (LLMs) are widely used in many sub-fields of\nnatural language processing (NLP) and generally known as excellent few-shot\nlearners with task-specific exemplars. Notably, chain of thought (CoT)\nprompting, a recent technique for eliciting complex multi-step reasoning\nthrough step-by-step answer examples, achieved the state-of-the-art\nperformances in arithmetics and symbolic reasoning, difficult system-2 tasks\nthat do not follow the standard scaling laws for LLMs. While these successes\nare often attributed to LLMs' ability for few-shot learning, we show that LLMs\nare decent zero-shot reasoners by simply adding \"Let's think step by step\"\nbefore each answer. Experimental results demonstrate that our Zero-shot-CoT,\nusing the same single prompt template, significantly outperforms zero-shot LLM\nperformances on diverse benchmark reasoning tasks including arithmetics\n(MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin\nFlip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled\nObjects), without any hand-crafted few-shot examples, e.g. increasing the\naccuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with\nlarge InstructGPT model (text-davinci-002), as well as similar magnitudes of\nimprovements with another off-the-shelf large model, 540B parameter PaLM. The\nversatility of this single prompt across very diverse reasoning tasks hints at\nuntapped and understudied fundamental zero-shot capabilities of LLMs,\nsuggesting high-level, multi-task broad cognitive capabilities may be extracted\nby simple prompting. We hope our work not only serves as the minimal strongest\nzero-shot baseline for the challenging reasoning benchmarks, but also\nhighlights the importance of carefully exploring and analyzing the enormous\nzero-shot knowledge hidden inside LLMs before crafting finetuning datasets or\nfew-shot exemplars.", + "authors": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa", + "published": "2022-05-24", + "updated": "2023-01-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.12147v2", + "title": "LogiCoT: Logical Chain-of-Thought Instruction-Tuning", + "abstract": "Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive\nchain-of-thought reasoning ability. Recent work on self-instruction tuning,\nsuch as Alpaca, has focused on enhancing the general proficiency of models.\nThese instructions enable the model to achieve performance comparable to\nGPT-3.5 on general tasks like open-domain text generation and paraphrasing.\nHowever, they fall short of helping the model handle complex reasoning tasks.\nTo bridge the gap, this paper presents LogiCoT, a new instruction-tuning\ndataset for Logical Chain-of-Thought reasoning with GPT-4. We elaborate on the\nprocess of harvesting instructions for prompting GPT-4 to generate\nchain-of-thought rationales. 
LogiCoT serves as an instruction set for teaching\nmodels of logical reasoning and elicits general reasoning skills.", + "authors": "Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang", + "published": "2023-05-20", + "updated": "2023-10-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.15864v1", + "title": "Peano: Learning Formal Mathematical Reasoning", + "abstract": "General mathematical reasoning is computationally undecidable, but humans\nroutinely solve new problems. Moreover, discoveries developed over centuries\nare taught to subsequent generations quickly. What structure enables this, and\nhow might that inform automated mathematical reasoning? We posit that central\nto both puzzles is the structure of procedural abstractions underlying\nmathematics. We explore this idea in a case study on 5 sections of beginning\nalgebra on the Khan Academy platform. To define a computational foundation, we\nintroduce Peano, a theorem-proving environment where the set of valid actions\nat any point is finite. We use Peano to formalize introductory algebra problems\nand axioms, obtaining well-defined search problems. We observe existing\nreinforcement learning methods for symbolic reasoning to be insufficient to\nsolve harder problems. Adding the ability to induce reusable abstractions\n(\"tactics\") from its own solutions allows an agent to make steady progress,\nsolving all problems. Furthermore, these abstractions induce an order to the\nproblems, seen at random during training. The recovered order has significant\nagreement with the expert-designed Khan Academy curriculum, and\nsecond-generation agents trained on the recovered curriculum learn\nsignificantly faster. These results illustrate the synergistic role of\nabstractions and curricula in the cultural transmission of mathematics.", + "authors": "Gabriel Poesia, Noah D. Goodman", + "published": "2022-11-29", + "updated": "2022-11-29", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.09278v2", + "title": "Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models", + "abstract": "Although Large Language Models (LLMs) demonstrate remarkable ability in\nprocessing and generating human-like text, they do have limitations when it\ncomes to comprehending and expressing world knowledge that extends beyond the\nboundaries of natural language(e.g., chemical molecular formula). Injecting a\ncollection of symbolic data directly into the training of LLMs can be\nproblematic, as it disregards the synergies among different symbolic families\nand overlooks the need for a balanced mixture of natural and symbolic data. In\nthis work, we tackle these challenges from both a data and framework\nperspective and introduce Symbol-LLM series models. First, we curated a data\ncollection consisting of 34 tasks and incorporating approximately 20 distinct\nsymbolic families, intending to capture the interrelations and foster synergies\nbetween symbols. Then, a two-stage tuning framework succeeds in injecting\nsymbolic knowledge without loss of the generality ability. Extensive\nexperiments on both symbol- and NL-centric tasks demonstrate the balanced and\nsuperior performances of Symbol-LLM series models. 
The project page is\nhttps://xufangzhi.github.io/symbol-llm-page/.", + "authors": "Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu", + "published": "2023-11-15", + "updated": "2024-02-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.10601v2", + "title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models", + "abstract": "Language models are increasingly being deployed for general problem solving\nacross a wide range of tasks, but are still confined to token-level,\nleft-to-right decision-making processes during inference. This means they can\nfall short in tasks that require exploration, strategic lookahead, or where\ninitial decisions play a pivotal role. To surmount these challenges, we\nintroduce a new framework for language model inference, Tree of Thoughts (ToT),\nwhich generalizes over the popular Chain of Thought approach to prompting\nlanguage models, and enables exploration over coherent units of text (thoughts)\nthat serve as intermediate steps toward problem solving. ToT allows LMs to\nperform deliberate decision making by considering multiple different reasoning\npaths and self-evaluating choices to decide the next course of action, as well\nas looking ahead or backtracking when necessary to make global choices. Our\nexperiments show that ToT significantly enhances language models'\nproblem-solving abilities on three novel tasks requiring non-trivial planning\nor search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in\nGame of 24, while GPT-4 with chain-of-thought prompting only solved 4% of\ntasks, our method achieved a success rate of 74%. Code repo with all prompts:\nhttps://github.com/princeton-nlp/tree-of-thought-llm.", + "authors": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan", + "published": "2023-05-17", + "updated": "2023-12-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.12150v1", + "title": "Your Large Language Model is Secretly a Fairness Proponent and You Should Prompt it Like One", + "abstract": "The widespread adoption of large language models (LLMs) underscores the\nurgent need to ensure their fairness. However, LLMs frequently present dominant\nviewpoints while ignoring alternative perspectives from minority parties,\nresulting in potential biases. We hypothesize that these fairness-violating\nbehaviors occur because LLMs express their viewpoints using a human personality\nthat represents the majority of training data. In response to this, we validate\nthat prompting LLMs with specific roles can allow LLMs to express diverse\nviewpoints. Building on this insight and observation, we develop FairThinking,\na pipeline designed to automatically generate roles that enable LLMs to\narticulate diverse perspectives for fair expressions. 
To evaluate FairThinking,\nwe create a dataset with a thousand items covering three fairness-related\ntopics and conduct experiments on GPT-3.5, GPT-4, Llama2, and Mistral to\ndemonstrate its superior performance.", + "authors": "Tianlin Li, Xiaoyu Zhang, Chao Du, Tianyu Pang, Qian Liu, Qing Guo, Chao Shen, Yang Liu", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2; J.4" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.16343v2", + "title": "Evaluating, Understanding, and Improving Constrained Text Generation for Large Language Models", + "abstract": "Advancements in natural language generation (NLG) and large language models\n(LLMs) have led to proficient text generation in various tasks. However,\nintegrating intricate constraints into neural text generation, due to LLMs'\nopacity, remains challenging. This study investigates constrained text\ngeneration for LLMs, where predefined constraints are applied during LLM's\ngeneration process. Our research mainly focuses on mainstream open-source LLMs,\ncategorizing constraints into lexical, structural, and relation-based types. We\nalso present various benchmarks to facilitate fair evaluation. The study\naddresses some key research questions, including evaluating, understanding and\nimproving constrained text generation for LLMs. Results illuminate LLMs'\ncapacity and deficiency to incorporate constraints and provide insights for\nfuture developments in constrained text generation. Codes and datasets will be\nreleased upon acceptance.", + "authors": "Xiang Chen, Xiaojun Wan", + "published": "2023-10-25", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.19465v1", + "title": "Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models", + "abstract": "Ensuring the trustworthiness of large language models (LLMs) is crucial. Most\nstudies concentrate on fully pre-trained LLMs to better understand and improve\nLLMs' trustworthiness. In this paper, to reveal the untapped potential of\npre-training, we pioneer the exploration of LLMs' trustworthiness during this\nperiod, focusing on five key dimensions: reliability, privacy, toxicity,\nfairness, and robustness. To begin with, we apply linear probing to LLMs. The\nhigh probing accuracy suggests that \\textit{LLMs in early pre-training can\nalready distinguish concepts in each trustworthiness dimension}. Therefore, to\nfurther uncover the hidden possibilities of pre-training, we extract steering\nvectors from a LLM's pre-training checkpoints to enhance the LLM's\ntrustworthiness. Finally, inspired by~\\citet{choi2023understanding} that mutual\ninformation estimation is bounded by linear probing accuracy, we also probe\nLLMs with mutual information to investigate the dynamics of trustworthiness\nduring pre-training. We are the first to observe a similar two-phase\nphenomenon: fitting and compression~\\citep{shwartz2017opening}. This research\nprovides an initial exploration of trustworthiness modeling during LLM\npre-training, seeking to unveil new insights and spur further developments in\nthe field. 
We will make our code publicly accessible at\n\\url{https://github.com/ChnQ/TracingLLM}.", + "authors": "Chen Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhenfei Yin, Yu Qiao, Yong Liu, Jing Shao", + "published": "2024-02-29", + "updated": "2024-02-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.15215v1", + "title": "Item-side Fairness of Large Language Model-based Recommendation System", + "abstract": "Recommendation systems for Web content distribution intricately connect to\nthe information access and exposure opportunities for vulnerable populations.\nThe emergence of Large Language Models-based Recommendation System (LRS) may\nintroduce additional societal challenges to recommendation systems due to the\ninherent biases in Large Language Models (LLMs). From the perspective of\nitem-side fairness, there remains a lack of comprehensive investigation into\nthe item-side fairness of LRS given the unique characteristics of LRS compared\nto conventional recommendation systems. To bridge this gap, this study examines\nthe property of LRS with respect to item-side fairness and reveals the\ninfluencing factors of both historical users' interactions and inherent\nsemantic biases of LLMs, shedding light on the need to extend conventional\nitem-side fairness methods for LRS. Towards this goal, we develop a concise and\neffective framework called IFairLRS to enhance the item-side fairness of an\nLRS. IFairLRS covers the main stages of building an LRS with specifically\nadapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS\nto fine-tune LLaMA, a representative LLM, on \\textit{MovieLens} and\n\\textit{Steam} datasets, and observe significant item-side fairness\nimprovements. The code can be found in\nhttps://github.com/JiangM-C/IFairLRS.git.", + "authors": "Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.17553v1", + "title": "RuBia: A Russian Language Bias Detection Dataset", + "abstract": "Warning: this work contains upsetting or disturbing content.\n Large language models (LLMs) tend to learn the social and cultural biases\npresent in the raw pre-training data. To test if an LLM's behavior is fair,\nfunctional datasets are employed, and due to their purpose, these datasets are\nhighly language and culture-specific. In this paper, we address a gap in the\nscope of multilingual bias evaluation by presenting a bias detection dataset\nspecifically designed for the Russian language, dubbed as RuBia. The RuBia\ndataset is divided into 4 domains: gender, nationality, socio-economic status,\nand diverse, each of the domains is further divided into multiple fine-grained\nsubdomains. Every example in the dataset consists of two sentences with the\nfirst reinforcing a potentially harmful stereotype or trope and the second\ncontradicting it. These sentence pairs were first written by volunteers and\nthen validated by native-speaking crowdsourcing workers. Overall, there are\nnearly 2,000 unique sentence pairs spread over 19 subdomains in RuBia. 
To\nillustrate the dataset's purpose, we conduct a diagnostic evaluation of\nstate-of-the-art or near-state-of-the-art LLMs and discuss the LLMs'\npredisposition to social biases.",         +        "authors": "Veronika Grigoreva, Anastasiia Ivanova, Ilseyar Alimova, Ekaterina Artemova",         +        "published": "2024-03-26",         +        "updated": "2024-03-26",         +        "primary_cat": "cs.CL",         +        "cats": [         +            "cs.CL"         +        ],         +        "category": "LLM Fairness"         +    },         +    {         +        "url": "http://arxiv.org/abs/2404.10199v3",         +        "title": "CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting",         +        "abstract": "As the utilization of large language models (LLMs) has proliferated\nworldwide, it is crucial for them to have adequate knowledge and fair\nrepresentation for diverse global cultures. In this work, we uncover culture\nperceptions of three SOTA models on 110 countries and regions on 8\nculture-related topics through culture-conditioned generations, and extract\nsymbols from these generations that are associated with each culture by the LLM.\nWe discover that culture-conditioned generations consist of linguistic \"markers\"\nthat distinguish marginalized cultures apart from default cultures. We also\ndiscover that LLMs have an uneven degree of diversity in the culture symbols,\nand that cultures from different geographic regions have different presence in\nLLMs' culture-agnostic generation. Our findings promote further research in\nstudying the knowledge and fairness of global culture perception in LLMs. Code\nand Data can be found in: https://github.com/huihanlhh/Culture-Gen/",         +        "authors": "Huihan Li, Liwei Jiang, Jena D. Huang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi",         +        "published": "2024-04-16",         +        "updated": "2024-04-26",         +        "primary_cat": "cs.CL",         +        "cats": [         +            "cs.CL",         +            "cs.AI"         +        ],         +        "category": "LLM Fairness"         +    },         +    {         +        "url": "http://arxiv.org/abs/2305.19118v1",         +        "title": "Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate",         +        "abstract": "Modern large language models (LLMs) like ChatGPT have shown remarkable\nperformance on general language tasks but still struggle on complex reasoning\ntasks, which drives the research on cognitive behaviors of LLMs to explore\nhuman-like problem-solving strategies. Along this direction, one representative\nstrategy is self-reflection, which asks an LLM to refine the solution with the\nfeedback generated by itself iteratively. However, our study shows that such\nreflection-style methods suffer from the Degeneration-of-Thought (DoT) problem:\nonce the LLM has established confidence in its solutions, it is unable to\ngenerate novel thoughts later through reflection even if its initial stance is\nincorrect. To address the DoT problem, we propose a Multi-Agent Debate (MAD)\nframework, in which multiple agents express their arguments in the state of\n\"tit for tat\" and a judge manages the debate process to obtain a final\nsolution. Clearly, our MAD framework encourages divergent thinking in LLMs\nwhich would be helpful for tasks that require deep levels of contemplation.\nExperiment results on two challenging datasets, commonsense machine translation\nand counter-intuitive arithmetic reasoning, demonstrate the effectiveness of\nour MAD framework. Extensive analyses suggest that the adaptive break of debate\nand the modest level of \"tit for tat\" state are required for MAD to obtain good\nperformance.
Moreover, we find that LLMs might not be a fair judge if different\nLLMs are used for agents. Codes:\nhttps://github.com/Skytliang/Multi-Agents-Debate", + "authors": "Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00588v1", + "title": "Fairness in Serving Large Language Models", + "abstract": "High-demand LLM inference services (e.g., ChatGPT and BARD) support a wide\nrange of requests from short chat conversations to long document reading. To\nensure that all client requests are processed fairly, most major LLM inference\nservices have request rate limits, to ensure that no client can dominate the\nrequest queue. However, this rudimentary notion of fairness also results in\nunder-utilization of the resources and poor client experience when there is\nspare capacity. While there is a rich literature on fair scheduling, serving\nLLMs presents new challenges due to their unpredictable request lengths and\ntheir unique batching characteristics on parallel accelerators. This paper\nintroduces the definition of LLM serving fairness based on a cost function that\naccounts for the number of input and output tokens processed. To achieve\nfairness in serving, we propose a novel scheduling algorithm, the Virtual Token\nCounter (VTC), a fair scheduler based on the continuous batching mechanism. We\nprove a 2x tight upper bound on the service difference between two backlogged\nclients, adhering to the requirement of work-conserving. Through extensive\nexperiments, we demonstrate the superior performance of VTC in ensuring\nfairness, especially in contrast to other baseline methods, which exhibit\nshortcomings under various conditions.", + "authors": "Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.PF" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.01937v1", + "title": "Can Large Language Models Be an Alternative to Human Evaluations?", + "abstract": "Human evaluation is indispensable and inevitable for assessing the quality of\ntexts generated by machine learning models or written by humans. However, human\nevaluation is very difficult to reproduce and its quality is notoriously\nunstable, hindering fair comparisons among different natural language\nprocessing (NLP) models and algorithms. Recently, large language models (LLMs)\nhave demonstrated exceptional performance on unseen tasks when only the task\ninstructions are provided. In this paper, we explore if such an ability of the\nLLMs can be used as an alternative to human evaluation. We present the LLMs\nwith the exact same instructions, samples to be evaluated, and questions used\nto conduct human evaluation, and then ask the LLMs to generate responses to\nthose questions; we dub this LLM evaluation. We use human evaluation and LLM\nevaluation to evaluate the texts in two NLP tasks: open-ended story generation\nand adversarial attacks. We show that the result of LLM evaluation is\nconsistent with the results obtained by expert human evaluation: the texts\nrated higher by human experts are also rated higher by the LLMs. 
We also find\nthat the results of LLM evaluation are stable over different formatting of the\ntask instructions and the sampling algorithm used to generate the answer. We\nare the first to show the potential of using LLMs to assess the quality of\ntexts and discuss the limitations and ethical considerations of LLM evaluation.", + "authors": "Cheng-Han Chiang, Hung-yi Lee", + "published": "2023-05-03", + "updated": "2023-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.18502v1", + "title": "Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification", + "abstract": "Employing Large Language Models (LLM) in various downstream applications such\nas classification is crucial, especially for smaller companies lacking the\nexpertise and resources required for fine-tuning a model. Fairness in LLMs\nhelps ensure inclusivity, equal representation based on factors such as race,\ngender and promotes responsible AI deployment. As the use of LLMs has become\nincreasingly prevalent, it is essential to assess whether LLMs can generate\nfair outcomes when subjected to considerations of fairness. In this study, we\nintroduce a framework outlining fairness regulations aligned with various\nfairness definitions, with each definition being modulated by varying degrees\nof abstraction. We explore the configuration for in-context learning and the\nprocedure for selecting in-context demonstrations using RAG, while\nincorporating fairness rules into the process. Experiments conducted with\ndifferent LLMs indicate that GPT-4 delivers superior results in terms of both\naccuracy and fairness compared to other models. This work is one of the early\nattempts to achieve fairness in prediction tasks by utilizing LLMs through\nin-context learning.", + "authors": "Garima Chhikara, Anurag Sharma, Kripabandhu Ghosh, Abhijnan Chakraborty", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.08836v2", + "title": "Bias and Fairness in Chatbots: An Overview", + "abstract": "Chatbots have been studied for more than half a century. With the rapid\ndevelopment of natural language processing (NLP) technologies in recent years,\nchatbots using large language models (LLMs) have received much attention\nnowadays. Compared with traditional ones, modern chatbots are more powerful and\nhave been used in real-world applications. There are however, bias and fairness\nconcerns in modern chatbot design. Due to the huge amounts of training data,\nextremely large model sizes, and lack of interpretability, bias mitigation and\nfairness preservation of modern chatbots are challenging. Thus, a comprehensive\noverview on bias and fairness in chatbot systems is given in this paper. The\nhistory of chatbots and their categories are first reviewed. Then, bias sources\nand potential harms in applications are analyzed. Considerations in designing\nfair and unbiased chatbot systems are examined. Finally, future research\ndirections are discussed.", + "authors": "Jintang Xue, Yun-Cheng Wang, Chengwei Wei, Xiaofeng Liu, Jonghye Woo, C. -C. 
Jay Kuo", + "published": "2023-09-16", + "updated": "2023-12-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.08495v2", + "title": "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans", + "abstract": "Large language models (LLMs) are becoming pervasive in everyday life, yet\ntheir propensity to reproduce biases inherited from training data remains a\npressing concern. Prior investigations into bias in LLMs have focused on the\nassociation of social groups with stereotypical attributes. However, this is\nonly one form of human bias such systems may reproduce. We investigate a new\nform of bias in LLMs that resembles a social psychological phenomenon where\nsocially subordinate groups are perceived as more homogeneous than socially\ndominant groups. We had ChatGPT, a state-of-the-art LLM, generate texts about\nintersectional group identities and compared those texts on measures of\nhomogeneity. We consistently found that ChatGPT portrayed African, Asian, and\nHispanic Americans as more homogeneous than White Americans, indicating that\nthe model described racial minority groups with a narrower range of human\nexperience. ChatGPT also portrayed women as more homogeneous than men, but\nthese differences were small. Finally, we found that the effect of gender\ndiffered across racial/ethnic groups such that the effect of gender was\nconsistent within African and Hispanic Americans but not within Asian and White\nAmericans. We argue that the tendency of LLMs to describe groups as less\ndiverse risks perpetuating stereotypes and discriminatory behavior.", + "authors": "Messi H. J. Lee, Jacob M. Montgomery, Calvin K. Lai", + "published": "2024-01-16", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.11406v2", + "title": "Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection", + "abstract": "The fairness and trustworthiness of Large Language Models (LLMs) are\nreceiving increasing attention. Implicit hate speech, which employs indirect\nlanguage to convey hateful intentions, occupies a significant portion of\npractice. However, the extent to which LLMs effectively address this issue\nremains insufficiently examined. This paper delves into the capability of LLMs\nto detect implicit hate speech (Classification Task) and express confidence in\ntheir responses (Calibration Task). Our evaluation meticulously considers\nvarious prompt patterns and mainstream uncertainty estimation methods. Our\nfindings highlight that LLMs exhibit two extremes: (1) LLMs display excessive\nsensitivity towards groups or topics that may cause fairness issues, resulting\nin misclassifying benign statements as hate speech. (2) LLMs' confidence scores\nfor each method excessively concentrate on a fixed range, remaining unchanged\nregardless of the dataset's complexity. Consequently, the calibration\nperformance is heavily reliant on primary classification accuracy. These\ndiscoveries unveil new limitations of LLMs, underscoring the need for caution\nwhen optimizing models to ensure they do not veer towards extremes. 
This serves\nas a reminder to carefully consider sensitivity and confidence in the pursuit\nof model fairness.",         +        "authors": "Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu",         +        "published": "2024-02-18",         +        "updated": "2024-02-26",         +        "primary_cat": "cs.CL",         +        "cats": [         +            "cs.CL"         +        ],         +        "category": "LLM Fairness"         +    },         +    {         +        "url": "http://arxiv.org/abs/2309.14345v2",         +        "title": "Bias Testing and Mitigation in LLM-based Code Generation",         +        "abstract": "Utilizing state-of-the-art Large Language Models (LLMs), automatic code\ngeneration models play a pivotal role in enhancing the productivity of software\ndevelopment procedures. As the adoption of LLMs becomes more widespread in\nsoftware coding ecosystems, a pressing issue has emerged: does the generated\ncode contain social bias and unfairness, such as those related to age, gender,\nand race? This issue concerns the integrity, fairness, and ethical foundation\nof software applications that depend on the code generated by these models, yet\nis under-explored in the literature. This paper presents a novel bias testing\nframework that is specifically designed for code generation tasks. Based on\nthis framework, we conduct an extensive evaluation of the bias in code\ngenerated by five state-of-the-art LLMs. Our findings reveal that 20.29% to\n44.93% of code functions generated by the models under study are biased when\nhandling bias sensitive tasks (i.e., tasks that involve sensitive attributes\nsuch as age and gender). This indicates that the existing LLMs can be unfair in\ncode generation, posing risks of unintended and harmful software behaviors. To\nmitigate bias for code generation models, we evaluate five bias mitigation\nprompt strategies, i.e., utilizing bias testing results to refine the code\n(zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our\nevaluation results illustrate that these strategies are all effective in\nmitigating bias. Overall, one-shot and few-shot learning are the two most\neffective. For GPT-4, 80% to 90% of code bias can be removed with one-shot\nlearning.",         +        "authors": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, Heming Cui",         +        "published": "2023-09-03",         +        "updated": "2024-01-09",         +        "primary_cat": "cs.SE",         +        "cats": [         +            "cs.SE",         +            "cs.AI"         +        ],         +        "category": "LLM Fairness"         +    },         +    {         +        "url": "http://arxiv.org/abs/2305.11595v3",         +        "title": "Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate",         +        "abstract": "Large Language Models (LLMs) have shown impressive capabilities in various\napplications, but they still face various inconsistency issues. Existing works\nprimarily focus on the inconsistency issues within a single LLM, while we\ncomplementarily explore the inter-consistency among multiple LLMs for\ncollaboration. To examine whether LLMs can collaborate effectively to achieve a\nconsensus for a shared goal, we focus on commonsense reasoning, and introduce a\nformal debate framework (FORD) to conduct a three-stage debate among LLMs with\nreal-world scenarios alignment: fair debate, mismatched debate, and roundtable\ndebate. Through extensive experiments on various datasets, LLMs can effectively\ncollaborate to reach a consensus despite noticeable inter-inconsistencies, but\nimbalances in their abilities can lead to domination by superior LLMs.\nLeveraging a more advanced LLM like GPT-4 as an authoritative judge can boost\ncollaboration performance.
Our work contributes to understanding the\ninter-consistency among LLMs and lays the foundation for developing future\ncollaboration methods. Codes and data are available at\nhttps://github.com/Waste-Wood/FORD", + "authors": "Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin", + "published": "2023-05-19", + "updated": "2023-10-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.12736v1", + "title": "Large Language Model Supply Chain: A Research Agenda", + "abstract": "The rapid advancements in pre-trained Large Language Models (LLMs) and Large\nMultimodal Models (LMMs) have ushered in a new era of intelligent applications,\ntransforming fields ranging from natural language processing to content\ngeneration. The LLM supply chain represents a crucial aspect of the\ncontemporary artificial intelligence landscape. It encompasses the entire\nlifecycle of pre-trained models, from its initial development and training to\nits final deployment and application in various domains. This paper presents a\ncomprehensive overview of the LLM supply chain, highlighting its three core\nelements: 1) the model infrastructure, encompassing datasets and toolchain for\ntraining, optimization, and deployment; 2) the model lifecycle, covering\ntraining, testing, releasing, and ongoing maintenance; and 3) the downstream\napplication ecosystem, enabling the integration of pre-trained models into a\nwide range of intelligent applications. However, this rapidly evolving field\nfaces numerous challenges across these key components, including data privacy\nand security, model interpretability and fairness, infrastructure scalability,\nand regulatory compliance. Addressing these challenges is essential for\nharnessing the full potential of LLMs and ensuring their ethical and\nresponsible use. This paper provides a future research agenda for the LLM\nsupply chain, aiming at driving the continued advancement and responsible\ndeployment of these transformative LLMs.", + "authors": "Shenao Wang, Yanjie Zhao, Xinyi Hou, Haoyu Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.01349v1", + "title": "Fairness in Large Language Models: A Taxonomic Survey", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable success across\nvarious domains. However, despite their promising performance in numerous\nreal-world applications, most of these algorithms lack fairness considerations.\nConsequently, they may lead to discriminatory outcomes against certain\ncommunities, particularly marginalized populations, prompting extensive study\nin fair LLMs. On the other hand, fairness in LLMs, in contrast to fairness in\ntraditional machine learning, entails exclusive backgrounds, taxonomies, and\nfulfillment techniques. To this end, this survey presents a comprehensive\noverview of recent advances in the existing literature concerning fair LLMs.\nSpecifically, a brief introduction to LLMs is provided, followed by an analysis\nof factors contributing to bias in LLMs. Additionally, the concept of fairness\nin LLMs is discussed categorically, summarizing metrics for evaluating bias in\nLLMs and existing algorithms for promoting fairness. 
Furthermore, resources for\nevaluating bias in LLMs, including toolkits and datasets, are summarized.\nFinally, existing research challenges and open questions are discussed.", + "authors": "Zhibo Chu, Zichong Wang, Wenbin Zhang", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05345v3", + "title": "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model", + "abstract": "Inspired by the recent success of large language models (LLMs) like ChatGPT,\nresearchers start to explore the adoption of LLMs for agile hardware design,\nsuch as generating design RTL based on natural-language instructions. However,\nin existing works, their target designs are all relatively simple and in a\nsmall scale, and proposed by the authors themselves, making a fair comparison\namong different LLM solutions challenging. In addition, many prior works only\nfocus on the design correctness, without evaluating the design qualities of\ngenerated design RTL. In this work, we propose an open-source benchmark named\nRTLLM, for generating design RTL with natural language instructions. To\nsystematically evaluate the auto-generated design RTL, we summarized three\nprogressive goals, named syntax goal, functionality goal, and design quality\ngoal. This benchmark can automatically provide a quantitative evaluation of any\ngiven LLM-based solution. Furthermore, we propose an easy-to-use yet\nsurprisingly effective prompt engineering technique named self-planning, which\nproves to significantly boost the performance of GPT-3.5 in our proposed\nbenchmark.", + "authors": "Yao Lu, Shang Liu, Qijun Zhang, Zhiyao Xie", + "published": "2023-08-10", + "updated": "2023-11-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02049v1", + "title": "Post Turing: Mapping the landscape of LLM Evaluation", + "abstract": "In the rapidly evolving landscape of Large Language Models (LLMs),\nintroduction of well-defined and standardized evaluation methodologies remains\na crucial challenge. This paper traces the historical trajectory of LLM\nevaluations, from the foundational questions posed by Alan Turing to the modern\nera of AI research. We categorize the evolution of LLMs into distinct periods,\neach characterized by its unique benchmarks and evaluation criteria. As LLMs\nincreasingly mimic human-like behaviors, traditional evaluation proxies, such\nas the Turing test, have become less reliable. We emphasize the pressing need\nfor a unified evaluation system, given the broader societal implications of\nthese models. Through an analysis of common evaluation methodologies, we\nadvocate for a qualitative shift in assessment approaches, underscoring the\nimportance of standardization and objective criteria. This work serves as a\ncall for the AI community to collaboratively address the challenges of LLM\nevaluation, ensuring their reliability, fairness, and societal benefit.", + "authors": "Alexey Tikhonov, Ivan P. 
Yamshchikov", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08517v1", + "title": "Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward", + "abstract": "While Large Language Models (LLMs) have seen widespread applications across\nnumerous fields, their limited interpretability poses concerns regarding their\nsafe operations from multiple aspects, e.g., truthfulness, robustness, and\nfairness. Recent research has started developing quality assurance methods for\nLLMs, introducing techniques such as offline detector-based or uncertainty\nestimation methods. However, these approaches predominantly concentrate on\npost-generation analysis, leaving the online safety analysis for LLMs during\nthe generation phase an unexplored area. To bridge this gap, we conduct in this\nwork a comprehensive evaluation of the effectiveness of existing online safety\nanalysis methods on LLMs. We begin with a pilot study that validates the\nfeasibility of detecting unsafe outputs in the early generation process.\nFollowing this, we establish the first publicly available benchmark of online\nsafety analysis for LLMs, including a broad spectrum of methods, models, tasks,\ndatasets, and evaluation metrics. Utilizing this benchmark, we extensively\nanalyze the performance of state-of-the-art online safety analysis methods on\nboth open-source and closed-source LLMs. This analysis reveals the strengths\nand weaknesses of individual methods and offers valuable insights into\nselecting the most appropriate method based on specific application scenarios\nand task requirements. Furthermore, we also explore the potential of using\nhybridization methods, i.e., combining multiple methods to derive a collective\nsafety conclusion, to enhance the efficacy of online safety analysis for LLMs.\nOur findings indicate a promising direction for the development of innovative\nand trustworthy quality assurance methodologies for LLMs, facilitating their\nreliable deployments across diverse domains.", + "authors": "Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14769v3", + "title": "Large Language Model (LLM) Bias Index -- LLMBI", + "abstract": "The Large Language Model Bias Index (LLMBI) is a pioneering approach designed\nto quantify and address biases inherent in large language models (LLMs), such\nas GPT-4. We recognise the increasing prevalence and impact of LLMs across\ndiverse sectors. This research introduces a novel metric, LLMBI, to\nsystematically measure and mitigate biases potentially skewing model responses.\nWe formulated LLMBI using a composite scoring system incorporating multiple\ndimensions of bias, including but not limited to age, gender, and racial\nbiases. To operationalise this metric, we engaged in a multi-step process\ninvolving collecting and annotating LLM responses, applying sophisticated\nNatural Language Processing (NLP) techniques for bias detection, and computing\nthe LLMBI score through a specially crafted mathematical formula. 
The formula\nintegrates weighted averages of various bias dimensions, a penalty for dataset\ndiversity deficiencies, and a correction for sentiment biases. Our empirical\nanalysis, conducted using responses from OpenAI's API, employs advanced\nsentiment analysis as a representative method for bias detection. The research\nreveals LLMs, whilst demonstrating impressive capabilities in text generation,\nexhibit varying degrees of bias across different dimensions. LLMBI provides a\nquantifiable measure to compare biases across models and over time, offering a\nvital tool for systems engineers, researchers and regulators in enhancing the\nfairness and reliability of LLMs. It highlights the potential of LLMs in\nmimicking unbiased human-like responses. Additionally, it underscores the\nnecessity of continuously monitoring and recalibrating such models to align\nwith evolving societal norms and ethical standards.", + "authors": "Abiodun Finbarrs Oketunji, Muhammad Anas, Deepthi Saina", + "published": "2023-12-22", + "updated": "2023-12-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10397v2", + "title": "FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models", + "abstract": "Detecting stereotypes and biases in Large Language Models (LLMs) can enhance\nfairness and reduce adverse impacts on individuals or groups when these LLMs\nare applied. However, the majority of existing methods focus on measuring the\nmodel's preference towards sentences containing biases and stereotypes within\ndatasets, which lacks interpretability and cannot detect implicit biases and\nstereotypes in the real world. To address this gap, this paper introduces a\nfour-stage framework to directly evaluate stereotypes and biases in the\ngenerated content of LLMs, including direct inquiry testing, serial or adapted\nstory testing, implicit association testing, and unknown situation testing.\nAdditionally, the paper proposes multi-dimensional evaluation metrics and\nexplainable zero-shot prompts for automated evaluation. Using the education\nsector as a case study, we constructed the Edu-FairMonitor based on the\nfour-stage framework, which encompasses 12,632 open-ended questions covering\nnine sensitive factors and 26 educational scenarios. Experimental results\nreveal varying degrees of stereotypes and biases in five LLMs evaluated on\nEdu-FairMonitor. Moreover, the results of our proposed automated evaluation\nmethod have shown a high correlation with human annotations.", + "authors": "Yanhong Bai, Jiabao Zhao, Jinxin Shi, Tingjiang Wei, Xingjiao Wu, Liang He", + "published": "2023-08-21", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.13343v1", + "title": "Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)", + "abstract": "With the development of large language models (LLMs) like the GPT series,\ntheir widespread use across various application scenarios presents a myriad of\nchallenges. This review initially explores the issue of domain specificity,\nwhere LLMs may struggle to provide precise answers to specialized questions\nwithin niche fields. The problem of knowledge forgetting arises as these LLMs\nmight find it hard to balance old and new information. 
The knowledge repetition\nphenomenon reveals that sometimes LLMs might deliver overly mechanized\nresponses, lacking depth and originality. Furthermore, knowledge illusion\ndescribes situations where LLMs might provide answers that seem insightful but\nare actually superficial, while knowledge toxicity focuses on harmful or biased\ninformation outputs. These challenges underscore problems in the training data\nand algorithmic design of LLMs. To address these issues, it's suggested to\ndiversify training data, fine-tune models, enhance transparency and\ninterpretability, and incorporate ethics and fairness training. Future\ntechnological trends might lean towards iterative methodologies, multimodal\nlearning, model personalization and customization, and real-time learning and\nfeedback mechanisms. In conclusion, future LLMs should prioritize fairness,\ntransparency, and ethics, ensuring they uphold high moral and ethical standards\nwhen serving humanity.", + "authors": "Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.07420v1", + "title": "FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs", + "abstract": "Training large language models (LLMs) is a costly endeavour in terms of time\nand computational resources. The large amount of training data used during the\nunsupervised pre-training phase makes it difficult to verify all data and,\nunfortunately, undesirable data may be ingested during training. Re-training\nfrom scratch is impractical and has led to the creation of the 'unlearning'\ndiscipline where models are modified to \"unlearn\" undesirable information\nwithout retraining. However, any modification can alter the behaviour of LLMs,\nespecially on key dimensions such as fairness. This is the first work that\nexamines this interplay between unlearning and fairness for LLMs. In\nparticular, we focus on a popular unlearning framework known as SISA [Bourtoule\net al., 2021], which creates an ensemble of models trained on disjoint shards.\nWe evaluate the performance-fairness trade-off for SISA, and empirically\ndemonstrate that SISA can indeed reduce fairness in LLMs. To remedy this, we\npropose post-processing bias mitigation techniques for ensemble models produced\nby SISA. We adapt the post-processing fairness improvement technique from\n[Hardt et al., 2016] to design three methods that can handle model ensembles,\nand prove that one of the methods is an optimal fair predictor for ensembles of\nmodels. Through experimental results, we demonstrate the efficacy of our\npost-processing framework called 'FairSISA'.", + "authors": "Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2304.03728v1", + "title": "Interpretable Unified Language Checking", + "abstract": "Despite recent concerns about undesirable behaviors generated by large\nlanguage models (LLMs), including non-factual, biased, and hateful language, we\nfind LLMs are inherent multi-task language checkers based on their latent\nrepresentations of natural and social knowledge. 
We present an interpretable,\nunified, language checking (UniLC) method for both human and machine-generated\nlanguage that aims to check if language input is factual and fair. While\nfairness and fact-checking tasks have been handled separately with dedicated\nmodels, we find that LLMs can achieve high performance on a combination of\nfact-checking, stereotype detection, and hate speech detection tasks with a\nsimple, few-shot, unified set of prompts. With the ``1/2-shot'' multi-task\nlanguage checking method proposed in this work, the GPT3.5-turbo model\noutperforms fully supervised baselines on several language tasks. The simple\napproach and results suggest that based on strong latent knowledge\nrepresentations, an LLM can be an adaptive and explainable tool for detecting\nmisinformation, stereotypes, and hate speech.", + "authors": "Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass", + "published": "2023-04-07", + "updated": "2023-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15198v2", + "title": "Do LLM Agents Exhibit Social Behavior?", + "abstract": "The advances of Large Language Models (LLMs) are expanding their utility in\nboth academic research and practical applications. Recent social science\nresearch has explored the use of these ``black-box'' LLM agents for simulating\ncomplex social systems and potentially substituting human subjects in\nexperiments. Our study delves into this emerging domain, investigating the\nextent to which LLMs exhibit key social interaction principles, such as social\nlearning, social preference, and cooperative behavior (indirect reciprocity),\nin their interactions with humans and other agents. We develop a framework for\nour study, wherein classical laboratory experiments involving human subjects\nare adapted to use LLM agents. This approach involves step-by-step reasoning\nthat mirrors human cognitive processes and zero-shot learning to assess the\ninnate preferences of LLMs. Our analysis of LLM agents' behavior includes both\nthe primary effects and an in-depth examination of the underlying mechanisms.\nFocusing on GPT-4, our analyses suggest that LLM agents appear to exhibit a\nrange of human-like social behaviors such as distributional and reciprocity\npreferences, responsiveness to group identity cues, engagement in indirect\nreciprocity, and social learning capabilities. However, our analysis also\nreveals notable differences: LLMs demonstrate a pronounced fairness preference,\nweaker positive reciprocity, and a more calculating approach in social learning\ncompared to humans. These insights indicate that while LLMs hold great promise\nfor applications in social science research, such as in laboratory experiments\nand agent-based modeling, the subtle behavioral differences between LLM agents\nand humans warrant further investigation. 
Careful examination and development\nof protocols in evaluating the social behaviors of LLMs are necessary before\ndirectly applying these models to emulate human behavior.", + "authors": "Yan Leng, Yuan Yuan", + "published": "2023-12-23", + "updated": "2024-02-22", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.SI", + "econ.GN", + "q-fin.EC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.09219v5", + "title": "\"Kelly is a Warm Person, Joseph is a Role Model\": Gender Biases in LLM-Generated Reference Letters", + "abstract": "Large Language Models (LLMs) have recently emerged as an effective tool to\nassist individuals in writing various types of content, including professional\ndocuments such as recommendation letters. Though bringing convenience, this\napplication also introduces unprecedented fairness concerns. Model-generated\nreference letters might be directly used by users in professional scenarios. If\nunderlying biases exist in these model-constructed letters, using them without\nscrutinization could lead to direct societal harms, such as sabotaging\napplication success rates for female applicants. In light of this pressing\nissue, it is imminent and necessary to comprehensively study fairness issues\nand associated harms in this real-world use case. In this paper, we critically\nexamine gender biases in LLM-generated reference letters. Drawing inspiration\nfrom social science findings, we design evaluation methods to manifest biases\nthrough 2 dimensions: (1) biases in language style and (2) biases in lexical\ncontent. We further investigate the extent of bias propagation by analyzing the\nhallucination bias of models, a term that we define to be bias exacerbation in\nmodel-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-\nChatGPT and Alpaca, we reveal significant gender biases in LLM-generated\nrecommendation letters. Our findings not only warn against using LLMs for this\napplication without scrutinization, but also illuminate the importance of\nthoroughly studying hidden biases and harms in LLM-generated professional\ndocuments.", + "authors": "Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng", + "published": "2023-10-13", + "updated": "2023-12-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.13862v2", + "title": "A Trip Towards Fairness: Bias and De-Biasing in Large Language Models", + "abstract": "Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training\nare emerging as the next big revolution in natural language processing and\nunderstanding. These CtB-LLMs are democratizing access to trainable Very\nLarge-Language Models (VLLMs) and, thus, may represent the building blocks of\nmany NLP systems solving downstream tasks. Hence, a little or a large bias in\nCtB-LLMs may cause huge harm. In this paper, we performed a large investigation\nof the bias of three families of CtB-LLMs, and we showed that debiasing\ntechniques are effective and usable. Indeed, according to current tests, the\nLLaMA and the OPT families have an important bias in gender, race, religion,\nand profession. In contrast to the analysis for other LLMs, we discovered that\nbias depends not on the number of parameters but on the perplexity. 
Finally,\nthe debiasing of OPT using LoRA reduces bias up to 4.12 points in the\nnormalized stereotype score.", + "authors": "Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto", + "published": "2023-05-23", + "updated": "2023-08-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.08780v1", + "title": "\"Im not Racist but...\": Discovering Bias in the Internal Knowledge of Large Language Models", + "abstract": "Large language models (LLMs) have garnered significant attention for their\nremarkable performance in a continuously expanding set of natural language\nprocessing tasks. However, these models have been shown to harbor inherent\nsocietal biases, or stereotypes, which can adversely affect their performance\nin their many downstream applications. In this paper, we introduce a novel,\npurely prompt-based approach to uncover hidden stereotypes within any arbitrary\nLLM. Our approach dynamically generates a knowledge representation of internal\nstereotypes, enabling the identification of biases encoded within the LLM's\ninternal knowledge. By illuminating the biases present in LLMs and offering a\nsystematic methodology for their analysis, our work contributes to advancing\ntransparency and promoting fairness in natural language processing systems.", + "authors": "Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter", + "published": "2023-10-13", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.03852v2", + "title": "FLM-101B: An Open LLM and How to Train It with $100K Budget", + "abstract": "Large language models (LLMs) have achieved remarkable success in NLP and\nmultimodal tasks, among others. Despite these successes, two main challenges\nremain in developing LLMs: (i) high computational cost, and (ii) fair and\nobjective evaluations. In this paper, we report a solution to significantly\nreduce LLM training cost through a growth strategy. We demonstrate that a\n101B-parameter LLM with 0.31T tokens can be trained with a budget of 100K US\ndollars. Inspired by IQ tests, we also consolidate an additional range of\nevaluations on top of existing evaluations that focus on knowledge-oriented\nabilities. These IQ evaluations include symbolic mapping, rule understanding,\npattern mining, and anti-interference. Such evaluations minimize the potential\nimpact of memorization. Experimental results show that our model, named\nFLM-101B, trained with a budget of 100K US dollars, achieves performance\ncomparable to powerful and well-known models, e.g., GPT-3 and GLM-130B,\nespecially on the additional range of IQ evaluations. The checkpoint of\nFLM-101B is released at https://huggingface.co/CofeAI/FLM-101B.", + "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Xuying Meng, Siqi Fan, Peng Han, Jing Li, Li Du, Bowen Qin, Zheng Zhang, Aixin Sun, Yequan Wang", + "published": "2023-09-07", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.00625v2", + "title": "Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models", + "abstract": "The burgeoning field of Large Language Models (LLMs), exemplified by\nsophisticated models like OpenAI's ChatGPT, represents a significant\nadvancement in artificial intelligence. 
These models, however, bring forth\nsubstantial challenges in the high consumption of computational, memory,\nenergy, and financial resources, especially in environments with limited\nresource capabilities. This survey aims to systematically address these\nchallenges by reviewing a broad spectrum of techniques designed to enhance the\nresource efficiency of LLMs. We categorize methods based on their optimization\nfocus: computational, memory, energy, financial, and network resources and\ntheir applicability across various stages of an LLM's lifecycle, including\narchitecture design, pretraining, finetuning, and system design. Additionally,\nthe survey introduces a nuanced categorization of resource efficiency\ntechniques by their specific resource types, which uncovers the intricate\nrelationships and mappings between various resources and corresponding\noptimization techniques. A standardized set of evaluation metrics and datasets\nis also presented to facilitate consistent and fair comparisons across\ndifferent models and techniques. By offering a comprehensive overview of the\ncurrent state of the art and identifying open research avenues, this survey serves as a\nfoundational reference for researchers and practitioners, aiding them in\ndeveloping more sustainable and efficient LLMs in a rapidly evolving landscape.", + "authors": "Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao", + "published": "2024-01-01", + "updated": "2024-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.10149v2", + "title": "A Survey on Fairness in Large Language Models", + "abstract": "Large Language Models (LLMs) have shown powerful performance and development\nprospects and are widely deployed in the real world. However, LLMs can capture\nsocial biases from unprocessed training data and propagate the biases to\ndownstream tasks. Unfair LLM systems have undesirable social impacts and\npotential harms. In this paper, we provide a comprehensive review of related\nresearch on fairness in LLMs. Considering the influence of parameter magnitude\nand training paradigm on research strategy, we divide existing fairness\nresearch into work oriented to medium-sized LLMs under pre-training and fine-tuning\nparadigms and work oriented to large-sized LLMs under prompting paradigms. First,\nfor medium-sized LLMs, we introduce evaluation metrics and debiasing methods\nfrom the perspectives of intrinsic bias and extrinsic bias, respectively. Then,\nfor large-sized LLMs, we introduce recent fairness research, including fairness\nevaluation, reasons for bias, and debiasing methods. 
Finally, we discuss and\nprovide insight on the challenges and future directions for the development of\nfairness in LLMs.", + "authors": "Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang", + "published": "2023-08-20", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.10567v3", + "title": "InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?", + "abstract": "Recent advancements in language technology and Artificial Intelligence have\nresulted in numerous Language Models being proposed to perform various tasks in\nthe legal domain ranging from predicting judgments to generating summaries.\nDespite their immense potential, these models have been proven to learn and\nexhibit societal biases and make unfair predictions. In this study, we explore\nthe ability of Large Language Models (LLMs) to perform legal tasks in the\nIndian landscape when social factors are involved. We present a novel metric,\n$\\beta$-weighted $\\textit{Legal Safety Score ($LSS_{\\beta}$)}$, which\nencapsulates both the fairness and accuracy aspects of the LLM. We assess LLMs'\nsafety by considering its performance in the $\\textit{Binary Statutory\nReasoning}$ task and its fairness exhibition with respect to various axes of\ndisparities in the Indian society. Task performance and fairness scores of\nLLaMA and LLaMA--2 models indicate that the proposed $LSS_{\\beta}$ metric can\neffectively determine the readiness of a model for safe usage in the legal\nsector. We also propose finetuning pipelines, utilising specialised legal\ndatasets, as a potential method to mitigate bias and improve model safety. The\nfinetuning procedures on LLaMA and LLaMA--2 models increase the $LSS_{\\beta}$,\nimproving their usability in the Indian legal domain. Our code is publicly\nreleased.", + "authors": "Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru", + "published": "2024-02-16", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.13840v1", + "title": "Whose Side Are You On? Investigating the Political Stance of Large Language Models", + "abstract": "Large Language Models (LLMs) have gained significant popularity for their\napplication in various everyday tasks such as text generation, summarization,\nand information retrieval. As the widespread adoption of LLMs continues to\nsurge, it becomes increasingly crucial to ensure that these models yield\nresponses that are politically impartial, with the aim of preventing\ninformation bubbles, upholding fairness in representation, and mitigating\nconfirmation bias. In this paper, we propose a quantitative framework and\npipeline designed to systematically investigate the political orientation of\nLLMs. Our investigation delves into the political alignment of LLMs across a\nspectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.\nAcross topics, the results indicate that LLMs exhibit a tendency to provide\nresponses that closely align with liberal or left-leaning perspectives rather\nthan conservative or right-leaning ones when user queries include details\npertaining to occupation, race, or political affiliation. 
The findings\npresented in this study not only reaffirm earlier observations regarding the\nleft-leaning characteristics of LLMs but also surface particular attributes,\nsuch as occupation, that are particularly susceptible to such inclinations even\nwhen directly steered towards conservatism. As a recommendation to avoid these\nmodels providing politicised responses, users should be mindful when crafting\nqueries, and exercise caution in selecting neutral prompt language.", + "authors": "Pagnarasmey Pit, Xingjun Ma, Mike Conway, Qingyu Chen, James Bailey, Henry Pit, Putrasmey Keo, Watey Diep, Yu-Gang Jiang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.SI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.14607v2", + "title": "Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications", + "abstract": "Recent literature has suggested the potential of using large language models\n(LLMs) to make classifications for tabular tasks. However, LLMs have been shown\nto exhibit harmful social biases that reflect the stereotypes and inequalities\npresent in society. To this end, as well as the widespread use of tabular data\nin many high-stakes applications, it is important to explore the following\nquestions: what sources of information do LLMs draw upon when making\nclassifications for tabular tasks; whether and to what extent are LLM\nclassifications for tabular data influenced by social biases and stereotypes;\nand what are the consequential implications for fairness?\n Through a series of experiments, we delve into these questions and show that\nLLMs tend to inherit social biases from their training data which significantly\nimpact their fairness in tabular classification tasks. Furthermore, our\ninvestigations show that in the context of bias mitigation, though in-context\nlearning and finetuning have a moderate effect, the fairness metric gap between\ndifferent subgroups is still larger than that in traditional machine learning\nmodels, such as Random Forest and shallow Neural Networks. This observation\nemphasizes that the social biases are inherent within the LLMs themselves and\ninherited from their pretraining corpus, not only from the downstream task\ndatasets. Besides, we demonstrate that label-flipping of in-context examples\ncan significantly reduce biases, further highlighting the presence of inherent\nbias within LLMs.", + "authors": "Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju", + "published": "2023-10-23", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.05694v1", + "title": "A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics", + "abstract": "The utilization of large language models (LLMs) in the Healthcare domain has\ngenerated both excitement and concern due to their ability to effectively\nrespond to free-text queries with certain professional knowledge. This survey\noutlines the capabilities of the currently developed LLMs for Healthcare and\nexplicates their development process, with the aim of providing an overview of\nthe development roadmap from traditional Pretrained Language Models (PLMs) to\nLLMs. 
Specifically, we first explore the potential of LLMs to enhance the\nefficiency and effectiveness of various Healthcare applications, highlighting\nboth the strengths and limitations. Secondly, we conduct a comparison between\nthe previous PLMs and the latest LLMs, as well as comparing various LLMs with\neach other. Then we summarize related Healthcare training data, training\nmethods, optimization strategies, and usage. Finally, the unique concerns\nassociated with deploying LLMs in Healthcare settings are investigated,\nparticularly regarding fairness, accountability, transparency and ethics. Our\nsurvey provides a comprehensive investigation from the perspectives of both computer\nscience and the Healthcare specialty. Besides the discussion about Healthcare\nconcerns, we support the computer science community by compiling a collection\nof open source resources, such as accessible datasets, the latest\nmethodologies, code implementations, and evaluation benchmarks on GitHub.\nIn summary, we contend that a significant paradigm shift is underway,\ntransitioning from PLMs to LLMs. This shift encompasses a move from\ndiscriminative AI approaches to generative AI approaches, as well as a shift\nfrom model-centered methodologies to data-centered methodologies.", + "authors": "Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18580v1", + "title": "FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity", + "abstract": "The widespread adoption of generative artificial intelligence has heightened concerns\nabout the potential harms posed by AI-generated texts, primarily stemming from\nfactoid, unfair, and toxic content. Previous researchers have invested much\neffort in assessing the harmlessness of generative language models. However,\nexisting benchmarks are struggling in the era of large language models (LLMs),\ndue to the stronger language generation and instruction following capabilities,\nas well as wider applications. In this paper, we propose FFT, a new benchmark\nwith 2116 elaborately designed instances, for LLM harmlessness evaluation with\nfactuality, fairness, and toxicity. To investigate the potential harms of LLMs,\nwe evaluate 9 representative LLMs covering various parameter scales, training\nstages, and creators. Experiments show that the harmlessness of LLMs is still\nunsatisfactory, and extensive analysis derives some insightful findings\nthat could inspire future research on harmless LLMs.", + "authors": "Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu", + "published": "2023-11-30", + "updated": "2023-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.01262v2", + "title": "Fairness Certification for Natural Language Processing and Large Language Models", + "abstract": "Natural Language Processing (NLP) plays an important role in our daily lives,\nparticularly due to the enormous progress of Large Language Models (LLMs).\nHowever, NLP has many fairness-critical use cases, e.g., as an expert system in\nrecruitment or as an LLM-based tutor in education. 
Since NLP is based on human\nlanguage, potentially harmful biases can diffuse into NLP systems and produce\nunfair results, discriminate against minorities or generate legal issues.\nHence, it is important to develop a fairness certification for NLP approaches.\nWe follow a qualitative research approach towards a fairness certification for\nNLP. In particular, we have reviewed a large body of literature on algorithmic\nfairness, and we have conducted semi-structured expert interviews with a wide\nrange of experts from that area. We have systematically devised six fairness\ncriteria for NLP, which can be further refined into 18 sub-categories. Our\ncriteria offer a foundation for operationalizing and testing processes to\ncertify fairness, both from the perspective of the auditor and the audited\norganization.", + "authors": "Vincent Freiberger, Erik Buchmann", + "published": "2024-01-02", + "updated": "2024-01-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "68T50", + "I.2.7" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.15585v1", + "title": "Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting", + "abstract": "There exist both scalable tasks, like reading comprehension and\nfact-checking, where model performance improves with model size, and unscalable\ntasks, like arithmetic reasoning and symbolic reasoning, where model\nperformance does not necessarily improve with model size. Large language models\n(LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate\nincremental predictions even on unscalable tasks. Unfortunately, despite their\nexceptional reasoning abilities, LLMs tend to internalize and reproduce\ndiscriminatory societal biases. Whether CoT can provide discriminatory or\negalitarian rationalizations for the implicit information in unscalable tasks\nremains an open question.\n In this study, we examine the impact of LLMs' step-by-step predictions on\ngender bias in unscalable tasks. For this purpose, we construct a benchmark for\nan unscalable task where the LLM is given a list of words comprising feminine,\nmasculine, and gendered occupational words, and is required to count the number\nof feminine and masculine words. In our CoT prompts, we require the LLM to\nexplicitly indicate whether each word in the word list is feminine or\nmasculine before making the final predictions. With counting and handling the\nmeaning of words, this benchmark has characteristics of both arithmetic\nreasoning and symbolic reasoning. Experimental results in English show that\nwithout step-by-step prediction, most LLMs make socially biased predictions,\ndespite the task being as simple as counting words. Interestingly, CoT\nprompting reduces this unconscious social bias in LLMs and encourages fair\npredictions.", + "authors": "Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki, Timothy Baldwin", + "published": "2024-01-28", + "updated": "2024-01-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.00306v1", + "title": "Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation", + "abstract": "Large Language Models (LLMs) can generate biased and toxic responses. Yet\nmost prior work on LLM gender bias evaluation requires predefined\ngender-related phrases or gender stereotypes, which are challenging to\ncollect comprehensively and are limited to explicit bias evaluation. 
In\naddition, we believe that instances devoid of gender-related language or\nexplicit stereotypes in inputs can still induce gender bias in LLMs. Thus, in\nthis work, we propose a conditional text generation mechanism without the need\nfor predefined gender phrases and stereotypes. This approach employs three\ntypes of inputs generated through three distinct strategies to probe LLMs,\naiming to show evidence of explicit and implicit gender biases in LLMs. We also\nutilize explicit and implicit evaluation metrics to evaluate gender bias in\nLLMs under different strategies. Our experiments demonstrate that an increased\nmodel size does not consistently lead to enhanced fairness and all tested LLMs\nexhibit explicit and/or implicit gender bias, even when explicit gender\nstereotypes are absent in the inputs.", + "authors": "Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee", + "published": "2023-11-01", + "updated": "2023-11-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.07609v3", + "title": "Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation", + "abstract": "The remarkable achievements of Large Language Models (LLMs) have led to the\nemergence of a novel recommendation paradigm -- Recommendation via LLM\n(RecLLM). Nevertheless, it is important to note that LLMs may contain social\nprejudices, and therefore, the fairness of recommendations made by RecLLM\nrequires further investigation. To avoid the potential risks of RecLLM, it is\nimperative to evaluate the fairness of RecLLM with respect to various sensitive\nattributes on the user side. Due to the differences between the RecLLM paradigm\nand the traditional recommendation paradigm, it is problematic to directly use\nthe fairness benchmark of traditional recommendation. To address the dilemma,\nwe propose a novel benchmark called Fairness of Recommendation via LLM\n(FaiRLLM). This benchmark comprises carefully crafted metrics and a dataset\nthat accounts for eight sensitive attributes in two recommendation scenarios:\nmusic and movies. By utilizing our FaiRLLM benchmark, we conducted an\nevaluation of ChatGPT and discovered that it still exhibits unfairness to some\nsensitive attributes when generating recommendations. Our code and dataset can\nbe found at https://github.com/jizhi-zhang/FaiRLLM.", + "authors": "Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-05-12", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2309.09397v1", + "title": "Do Large GPT Models Discover Moral Dimensions in Language Representations? A Topological Study Of Sentence Embeddings", + "abstract": "As Large Language Models are deployed within Artificial Intelligence systems\nthat are increasingly integrated with human society, it becomes more important\nthan ever to study their internal structures. Higher level abilities of LLMs\nsuch as GPT-3.5 emerge in large part due to informative language\nrepresentations they induce from raw text data during pre-training on trillions\nof words. These embeddings exist in vector spaces of several thousand\ndimensions, and their processing involves mapping between multiple vector\nspaces, with total number of parameters on the order of trillions. 
Furthermore,\nthese language representations are induced by gradient optimization, resulting\nin a black box system that is hard to interpret. In this paper, we take a look\nat the topological structure of neuronal activity in the \"brain\" of Chat-GPT's\nfoundation language model, and analyze it with respect to a metric representing\nthe notion of fairness. We develop a novel approach to visualize GPT's moral\ndimensions. We first compute a fairness metric, inspired by social psychology\nliterature, to identify factors that typically influence fairness assessments\nin humans, such as legitimacy, need, and responsibility. Subsequently, we\nsummarize the manifold's shape using a lower-dimensional simplicial complex,\nwhose topology is derived from this metric. We color it with a heat map\nassociated with this fairness metric, producing human-readable visualizations\nof the high-dimensional sentence manifold. Our results show that sentence\nembeddings based on GPT-3.5 can be decomposed into two submanifolds\ncorresponding to fair and unfair moral judgments. This indicates that GPT-based\nlanguage models develop a moral dimension within their representation spaces\nand induce an understanding of fairness during their training process.", + "authors": "Stephen Fitz", + "published": "2023-09-17", + "updated": "2023-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG", + "cs.NE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.02219v1", + "title": "FairEvalLLM. A Comprehensive Framework for Benchmarking Fairness in Large Language Model Recommender Systems", + "abstract": "This paper presents a framework for evaluating fairness in recommender\nsystems powered by Large Language Models (RecLLMs), addressing the need for a\nunified approach that spans various fairness dimensions including sensitivity\nto user attributes, intrinsic fairness, and discussions of fairness based on\nunderlying benefits. In addition, our framework introduces counterfactual\nevaluations and integrates diverse user group considerations to enhance the\ndiscourse on fairness evaluation for RecLLMs.\n Our key contributions include the development of a robust framework for\nfairness evaluation in LLM-based recommendations and a structured method to\ncreate \\textit{informative user profiles} from demographic data, historical\nuser preferences, and recent interactions. We argue that the latter is\nessential for enhancing personalization in such systems, especially in\ntemporal-driven scenarios. We demonstrate the utility of our framework through\npractical applications on two datasets, LastFM-1K and ML-1M. We conduct\nexperiments on a subsample of 80 users from each dataset, testing and assessing\nthe effectiveness of various prompt construction scenarios and in-context\nlearning, comprising more than 50 scenarios. This results in more than 4000\nrecommendations (80 * 50 = 4000). Our study reveals that while there are no\nsignificant unfairness issues in scenarios involving sensitive attributes, some\nconcerns remain. However, in terms of intrinsic fairness, which does not\ninvolve direct sensitivity, unfairness across demographic groups remains\nsignificant. 
The code and data used for this paper are available at:\n\\url{https://shorturl.at/awBFM}.", + "authors": "Yashar Deldjoo", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.04814v2", + "title": "Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks", + "abstract": "We introduce Syntax-Aware Fill-In-the-Middle (SAFIM), a new benchmark for\nevaluating Large Language Models (LLMs) on the code Fill-in-the-Middle (FIM)\ntask. This benchmark focuses on syntax-aware completions of program structures\nsuch as code blocks and conditional expressions, and includes 17,720 examples\nfrom multiple programming languages, sourced from recent code submissions after\nApril 2022 to minimize data contamination. SAFIM provides a robust framework\nwith various prompt designs and novel syntax-aware post-processing techniques,\nfacilitating accurate and fair comparisons across LLMs. Our comprehensive\nevaluation of 15 LLMs shows that FIM pretraining not only enhances FIM\nproficiency but also improves Left-to-Right (L2R) inference using LLMs. Our\nfindings challenge conventional beliefs and suggest that pretraining methods\nand data quality have more impact than model size. SAFIM thus serves as a\nfoundational platform for future research in effective pretraining strategies\nfor code LLMs. The evaluation toolkit and dataset are available at\nhttps://github.com/gonglinyuan/safim, and the leaderboard is available at\nhttps://safimbenchmark.com.", + "authors": "Linyuan Gong, Sida Wang, Mostafa Elhoushi, Alvin Cheung", + "published": "2024-03-07", + "updated": "2024-04-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.09606v1", + "title": "Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey", + "abstract": "Causal inference has shown potential in enhancing the predictive accuracy,\nfairness, robustness, and explainability of Natural Language Processing (NLP)\nmodels by capturing causal relationships among variables. The emergence of\ngenerative Large Language Models (LLMs) has significantly impacted various NLP\ndomains, particularly through their advanced reasoning capabilities. This\nsurvey focuses on evaluating and improving LLMs from a causal view in the\nfollowing areas: understanding and improving the LLMs' reasoning capacity,\naddressing fairness and safety issues in LLMs, complementing LLMs with\nexplanations, and handling multimodality. Meanwhile, LLMs' strong reasoning\ncapacities can in turn contribute to the field of causal inference by aiding\ncausal relationship discovery and causal effect estimations. 
This review\nexplores the interplay between causal inference frameworks and LLMs from both\nperspectives, emphasizing their collective potential to further the development\nof more advanced and equitable artificial intelligence systems.", + "authors": "Xiaoyu Liu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.08189v1", + "title": "Simulating Human Strategic Behavior: Comparing Single and Multi-agent LLMs", + "abstract": "When creating plans, policies, or applications for people, it is challenging\nfor designers to think through the strategic ways that different people will\nbehave. Recently, Large Language Models (LLMs) have been shown to create\nrealistic simulations of human-like behavior based on personas. We build on\nthis to investigate whether LLMs can simulate human strategic behavior. Human\nstrategies are complex because they take into account social norms in addition\nto aiming to maximize personal gain. The ultimatum game is a classic economics\nexperiment used to understand human strategic behavior in a social setting. It\nshows that people will often choose to \"punish\" other players to enforce social\nnorms rather than to maximize personal profits. We test whether LLMs can\nreplicate this complex behavior in simulations. We compare two architectures:\nsingle- and multi-agent LLMs. We compare their abilities to (1) simulate\nhuman-like actions in the ultimatum game, (2) simulate two player\npersonalities, greedy and fair, and (3) create robust strategies that are\nlogically complete and consistent with personality. Our evaluation shows the\nmulti-agent architecture is much more accurate than single LLMs (88% vs. 50%)\nin simulating human strategy creation and actions for personality pairs. Thus\nthere is potential to use LLMs to simulate human strategic behavior to help\ndesigners, planners, and policymakers perform preliminary exploration of how\npeople behave in systems.", + "authors": "Karthik Sreedhar, Lydia Chilton", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.17916v2", + "title": "LLM-Resistant Math Word Problem Generation via Adversarial Attacks", + "abstract": "Large language models (LLMs) have significantly transformed the educational\nlandscape. As current plagiarism detection tools struggle to keep pace with\nLLMs' rapid advancements, the educational community faces the challenge of\nassessing students' true problem-solving abilities in the presence of LLMs. In\nthis work, we explore a new paradigm for ensuring fair evaluation -- generating\nadversarial examples which preserve the structure and difficulty of the\noriginal questions aimed for assessment, but are unsolvable by LLMs. Focusing\non the domain of math word problems, we leverage abstract syntax trees to\nstructurally generate adversarial examples that cause LLMs to produce incorrect\nanswers by simply editing the numeric values in the problems. We conduct\nexperiments on various open- and closed-source LLMs, quantitatively and\nqualitatively demonstrating that our method significantly degrades their math\nproblem-solving ability. 
We identify shared vulnerabilities among LLMs and\npropose a cost-effective approach to attack high-cost models. Additionally, we\nconduct automatic analysis on math problems and investigate the cause of\nfailure, offering a nuanced view into the models' limitations.", + "authors": "Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra", + "published": "2024-02-27", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2405.01769v1", + "title": "A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law", + "abstract": "In the fast-evolving domain of artificial intelligence, large language models\n(LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance,\nhealthcare, and law: domains characterized by their reliance on professional\nexpertise, challenging data acquisition, high-stakes, and stringent regulatory\ncompliance. This survey offers a detailed exploration of the methodologies,\napplications, challenges, and forward-looking opportunities of LLMs within\nthese high-stakes sectors. We highlight the instrumental role of LLMs in\nenhancing diagnostic and treatment methodologies in healthcare, innovating\nfinancial analytics, and refining legal interpretation and compliance\nstrategies. Moreover, we critically examine the ethics for LLM applications in\nthese fields, pointing out the existing ethical concerns and the need for\ntransparent, fair, and robust AI systems that respect regulatory norms. By\npresenting a thorough review of current literature and practical applications,\nwe showcase the transformative impact of LLMs, and outline the imperative for\ninterdisciplinary cooperation, methodological advancements, and ethical\nvigilance. Through this lens, we aim to spark dialogue and inspire future\nresearch dedicated to maximizing the benefits of LLMs while mitigating their\nrisks in these precision-dependent sectors. To facilitate future research on\nLLMs in these critical societal domains, we also initiate a reading list that\ntracks the latest advancements under this topic, which will be continually\nupdated: \\url{https://github.com/czyssrs/LLM_X_papers}.", + "authors": "Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.02680v1", + "title": "Large Language Models are Geographically Biased", + "abstract": "Large Language Models (LLMs) inherently carry the biases contained in their\ntraining corpora, which can lead to the perpetuation of societal harm. As the\nimpact of these foundation models grows, understanding and evaluating their\nbiases becomes crucial to achieving fairness and accuracy. We propose to study\nwhat LLMs know about the world we live in through the lens of geography. This\napproach is particularly powerful as there is ground truth for the numerous\naspects of human life that are meaningfully projected onto geographic space\nsuch as culture, race, language, politics, and religion. We show various\nproblematic geographic biases, which we define as systemic errors in geospatial\npredictions. 
Initially, we demonstrate that LLMs are capable of making accurate\nzero-shot geospatial predictions in the form of ratings that show strong\nmonotonic correlation with ground truth (Spearman's $\\rho$ of up to 0.89). We\nthen show that LLMs exhibit common biases across a range of objective and\nsubjective topics. In particular, LLMs are clearly biased against locations\nwith lower socioeconomic conditions (e.g. most of Africa) on a variety of\nsensitive subjective topics such as attractiveness, morality, and intelligence\n(Spearman's $\\rho$ of up to 0.70). Finally, we introduce a bias score to\nquantify this and find that there is significant variation in the magnitude of\nbias across existing LLMs.", + "authors": "Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.06899v4", + "title": "Flames: Benchmarking Value Alignment of LLMs in Chinese", + "abstract": "The widespread adoption of large language models (LLMs) across various\nregions underscores the urgent need to evaluate their alignment with human\nvalues. Current benchmarks, however, fall short of effectively uncovering\nsafety vulnerabilities in LLMs. Despite numerous models achieving high scores\nand 'topping the chart' in these evaluations, there is still a significant gap\nin LLMs' deeper alignment with human values and achieving genuine harmlessness.\nTo this end, this paper proposes a value alignment benchmark named Flames,\nwhich encompasses both common harmlessness principles and a unique morality\ndimension that integrates specific Chinese values such as harmony. Accordingly,\nwe carefully design adversarial prompts that incorporate complex scenarios and\njailbreaking methods, mostly with implicit malice. By prompting 17 mainstream\nLLMs, we obtain model responses and rigorously annotate them for detailed\nevaluation. Our findings indicate that all the evaluated LLMs demonstrate\nrelatively poor performance on Flames, particularly in the safety and fairness\ndimensions. We also develop a lightweight specified scorer capable of scoring\nLLMs across multiple dimensions to efficiently evaluate new models on the\nbenchmark. The complexity of Flames has far exceeded existing benchmarks,\nsetting a new challenge for contemporary LLMs and highlighting the need for\nfurther alignment of LLMs. Our benchmark is publicly available at\nhttps://github.com/AIFlames/Flames.", + "authors": "Kexin Huang, Xiangyang Liu, Qianyu Guo, Tianxiang Sun, Jiawei Sun, Yaru Wang, Zeyang Zhou, Yixu Wang, Yan Teng, Xipeng Qiu, Yingchun Wang, Dahua Lin", + "published": "2023-11-12", + "updated": "2024-04-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.09447v2", + "title": "How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", + "abstract": "The rapid progress in open-source Large Language Models (LLMs) is\nsignificantly driving AI development forward. However, there is still a limited\nunderstanding of their trustworthiness. Deploying these models at scale without\nsufficient trustworthiness can pose significant risks, highlighting the need to\nuncover these issues promptly. 
In this work, we conduct an adversarial\nassessment of open-source LLMs on trustworthiness, scrutinizing them across\neight different aspects including toxicity, stereotypes, ethics, hallucination,\nfairness, sycophancy, privacy, and robustness against adversarial\ndemonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU)\nprompting strategy by incorporating carefully crafted malicious demonstrations\nfor trustworthiness attack. Our extensive experiments encompass recent and\nrepresentative series of open-source LLMs, including Vicuna, MPT, Falcon,\nMistral, and Llama 2. The empirical outcomes underscore the efficacy of our\nattack strategy across diverse aspects. More interestingly, our result analysis\nreveals that models with superior performance in general NLP tasks do not\nalways have greater trustworthiness; in fact, larger models can be more\nvulnerable to attacks. Additionally, models that have undergone instruction\ntuning, focusing on instruction following, tend to be more susceptible,\nalthough fine-tuning LLMs for safety alignment proves effective in mitigating\nadversarial trustworthiness attacks.", + "authors": "Lingbo Mo, Boshi Wang, Muhao Chen, Huan Sun", + "published": "2023-11-15", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.05374v2", + "title": "Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment", + "abstract": "Ensuring alignment, which refers to making models behave in accordance with\nhuman intentions [1,2], has become a critical task before deploying large\nlanguage models (LLMs) in real-world applications. For instance, OpenAI devoted\nsix months to iteratively aligning GPT-4 before its release [3]. However, a\nmajor challenge faced by practitioners is the lack of clear guidance on\nevaluating whether LLM outputs align with social norms, values, and\nregulations. This obstacle hinders systematic iteration and deployment of LLMs.\nTo address this issue, this paper presents a comprehensive survey of key\ndimensions that are crucial to consider when assessing LLM trustworthiness. The\nsurvey covers seven major categories of LLM trustworthiness: reliability,\nsafety, fairness, resistance to misuse, explainability and reasoning, adherence\nto social norms, and robustness. Each major category is further divided into\nseveral sub-categories, resulting in a total of 29 sub-categories.\nAdditionally, a subset of 8 sub-categories is selected for further\ninvestigation, where corresponding measurement studies are designed and\nconducted on several widely-used LLMs. The measurement results indicate that,\nin general, more aligned models tend to perform better in terms of overall\ntrustworthiness. However, the effectiveness of alignment varies across the\ndifferent trustworthiness categories considered. This highlights the importance\nof conducting more fine-grained analyses, testing, and making continuous\nimprovements on LLM alignment. By shedding light on these key dimensions of LLM\ntrustworthiness, this paper aims to provide valuable insights and guidance to\npractitioners in the field. 
Understanding and addressing these concerns will be\ncrucial in achieving reliable and ethically sound deployment of LLMs in various\napplications.", + "authors": "Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, Hang Li", + "published": "2023-08-10", + "updated": "2024-03-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.03033v1", + "title": "Beyond Words: A Mathematical Framework for Interpreting Large Language Models", + "abstract": "Large language models (LLMs) are powerful AI tools that can generate and\ncomprehend natural language text and other complex information. However, the\nfield lacks a mathematical framework to systematically describe, compare and\nimprove LLMs. We propose Hex a framework that clarifies key terms and concepts\nin LLM research, such as hallucinations, alignment, self-verification and\nchain-of-thought reasoning. The Hex framework offers a precise and consistent\nway to characterize LLMs, identify their strengths and weaknesses, and\nintegrate new findings. Using Hex, we differentiate chain-of-thought reasoning\nfrom chain-of-thought prompting and establish the conditions under which they\nare equivalent. This distinction clarifies the basic assumptions behind\nchain-of-thought prompting and its implications for methods that use it, such\nas self-verification and prompt programming.\n Our goal is to provide a formal framework for LLMs that can help both\nresearchers and practitioners explore new possibilities for generative AI. We\ndo not claim to have a definitive solution, but rather a tool for opening up\nnew research avenues. We argue that our formal definitions and results are\ncrucial for advancing the discussion on how to build generative AI systems that\nare safe, reliable, fair and robust, especially in domains like healthcare and\nsoftware engineering.", + "authors": "Javier Gonz\u00e1lez, Aditya V. Nori", + "published": "2023-11-06", + "updated": "2023-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.13925v1", + "title": "MARIO Eval: Evaluate Your Math LLM with your Math LLM--A mathematical dataset evaluation toolkit", + "abstract": "Large language models (LLMs) have been explored in a variety of reasoning\ntasks including solving of mathematical problems. Each math dataset typically\nincludes its own specially designed evaluation script, which, while suitable\nfor its intended use, lacks generalizability across different datasets.\nConsequently, updates and adaptations to these evaluation tools tend to occur\nwithout being systematically reported, leading to inconsistencies and obstacles\nto fair comparison across studies. To bridge this gap, we introduce a\ncomprehensive mathematical evaluation toolkit that not only utilizes a python\ncomputer algebra system (CAS) for its numerical accuracy, but also integrates\nan optional LLM, known for its considerable natural language processing\ncapabilities. To validate the effectiveness of our toolkit, we manually\nannotated two distinct datasets. Our experiments demonstrate that the toolkit\nyields more robust evaluation results compared to prior works, even without an\nLLM. 
Furthermore, when an LLM is incorporated, there is a notable enhancement.\nThe code for our method will be made available at\n\\url{https://github.com/MARIO-Math-Reasoning/math_evaluation}.", + "authors": "Boning Zhang, Chengxi Li, Kai Fan", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.14208v2", + "title": "Content Conditional Debiasing for Fair Text Embedding", + "abstract": "Mitigating biases in machine learning models has gained increasing attention\nin Natural Language Processing (NLP). Yet, only a few studies focus on fair\ntext embeddings, which are crucial yet challenging for real-world applications.\nIn this paper, we propose a novel method for learning fair text embeddings. We\nachieve fairness while maintaining utility trade-off by ensuring conditional\nindependence between sensitive attributes and text embeddings conditioned on\nthe content. Specifically, we enforce that embeddings of texts with different\nsensitive attributes but identical content maintain the same distance toward\nthe embedding of their corresponding neutral text. Furthermore, we address the\nissue of lacking proper training data by using Large Language Models (LLMs) to\naugment texts into different sensitive groups. Our extensive evaluations\ndemonstrate that our approach effectively improves fairness while preserving\nthe utility of embeddings, representing a pioneering effort in achieving\nconditional independence for fair text embeddings.", + "authors": "Wenlong Deng, Blair Chen, Xiaoxiao Li, Christos Thrampoulidis", + "published": "2024-02-22", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15398v1", + "title": "Fairness-Aware Structured Pruning in Transformers", + "abstract": "The increasing size of large language models (LLMs) has introduced challenges\nin their training and inference. Removing model components is perceived as a\nsolution to tackle the large model sizes, however, existing pruning methods\nsolely focus on performance, without considering an essential aspect for the\nresponsible use of LLMs: model fairness. It is crucial to address the fairness\nof LLMs towards diverse groups, such as women, Black people, LGBTQ+, Jewish\ncommunities, among others, as they are being deployed and available to a wide\naudience. In this work, first, we investigate how attention heads impact\nfairness and performance in pre-trained transformer-based language models. We\nthen propose a novel method to prune the attention heads that negatively impact\nfairness while retaining the heads critical for performance, i.e. language\nmodeling capabilities. Our approach is practical in terms of time and\nresources, as it does not require fine-tuning the final pruned, and fairer,\nmodel. 
Our findings demonstrate a reduction in gender bias by 19%, 19.5%,\n39.5%, 34.7%, 23%, and 8% for DistilGPT-2, GPT-2, GPT-Neo of two different\nsizes, GPT-J, and Llama 2 models, respectively, in comparison to the biased\nmodel, with only a slight decrease in performance.", + "authors": "Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18130v2", + "title": "DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues", + "abstract": "Controversy is a reflection of our zeitgeist, and an important aspect of any\ndiscourse. The rise of large language models (LLMs) as conversational systems\nhas increased public reliance on these systems for answers to their various\nquestions. Consequently, it is crucial to systematically examine how these\nmodels respond to questions pertaining to ongoing debates. However, few\nsuch datasets exist that provide human-annotated labels reflecting\ncontemporary discussions. To foster research in this area, we propose a novel\nconstruction of a controversial questions dataset, expanding upon the publicly\nreleased Quora Question Pairs Dataset. This dataset presents challenges\nconcerning knowledge recency, safety, fairness, and bias. We evaluate different\nLLMs using a subset of this dataset, illuminating how they handle controversial\nissues and the stances they adopt. This research ultimately contributes to our\nunderstanding of LLMs' interaction with controversial issues, paving the way\nfor improvements in their comprehension and handling of complex societal\ndebates.", + "authors": "David Q. Sun, Artem Abzaliev, Hadas Kotek, Zidi Xiu, Christopher Klein, Jason D. Williams", + "published": "2023-10-27", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.HC" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.18569v1", + "title": "Fairness of ChatGPT", + "abstract": "Understanding and addressing unfairness in LLMs are crucial for responsible\nAI deployment. However, there is a limited availability of quantitative\nanalyses and in-depth studies regarding fairness evaluations in LLMs,\nespecially when applying LLMs to high-stakes fields. This work aims to fill\nthis gap by providing a systematic evaluation of the effectiveness and fairness\nof LLMs using ChatGPT as a case study. We focus on assessing ChatGPT's\nperformance in high-stakes fields including education, criminology, finance and\nhealthcare. To make a thorough evaluation, we consider both group fairness and\nindividual fairness and we also observe the disparities in ChatGPT's outputs\nunder a set of biased or unbiased prompts. 
This work contributes to a deeper\nunderstanding of LLMs' fairness performance, facilitates bias mitigation and\nfosters the development of responsible artificial intelligence systems.", + "authors": "Yunqi Li, Yongfeng Zhang", + "published": "2023-05-22", + "updated": "2023-05-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.07688v1", + "title": "CyberMetric: A Benchmark Dataset for Evaluating Large Language Models Knowledge in Cybersecurity", + "abstract": "Large Language Models (LLMs) excel across various domains, from computer\nvision to medical diagnostics. However, understanding the diverse landscape of\ncybersecurity, encompassing cryptography, reverse engineering, and managerial\nfacets like risk assessment, presents a challenge, even for human experts. In\nthis paper, we introduce CyberMetric, a benchmark dataset comprising 10,000\nquestions sourced from standards, certifications, research papers, books, and\nother publications in the cybersecurity domain. The questions are created\nthrough a collaborative process, i.e., merging expert knowledge with LLMs,\nincluding GPT-3.5 and Falcon-180B. Human experts spent over 200 hours verifying\ntheir accuracy and relevance. Beyond assessing LLMs' knowledge, the dataset's\nmain goal is to facilitate a fair comparison between humans and different LLMs\nin cybersecurity. To achieve this, we carefully selected 80 questions covering\na wide range of topics within cybersecurity and involved 30 participants of\ndiverse expertise levels, facilitating a comprehensive comparison between human\nand machine intelligence in this area. The findings revealed that LLMs\noutperformed humans in almost every aspect of cybersecurity.", + "authors": "Norbert Tihanyi, Mohamed Amine Ferrag, Ridhi Jain, Merouane Debbah", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.13095v1", + "title": "Enhancing Logical Reasoning in Large Language Models to Facilitate Legal Applications", + "abstract": "Language serves as a vehicle for conveying thought, enabling communication\namong individuals. The ability to distinguish between diverse concepts,\nidentify fairness and injustice, and comprehend a range of legal notions\nfundamentally relies on logical reasoning. Large Language Models (LLMs) attempt\nto emulate human language understanding and generation, but their competency in\nlogical reasoning remains limited. This paper seeks to address the\nphilosophical question: How can we effectively teach logical reasoning to LLMs\nwhile maintaining a deep understanding of the intricate relationship between\nlanguage and logic? By focusing on bolstering LLMs' capabilities in logical\nreasoning, we aim to expand their applicability in law and other\nlogic-intensive disciplines. To this end, we propose a Reinforcement Learning\nfrom Logical Feedback (RLLF) approach, which serves as a potential framework\nfor refining LLMs' reasoning capacities. 
Through RLLF and a revised evaluation\nmethodology, we explore new avenues for research in this domain and contribute\nto the development of LLMs capable of handling complex legal reasoning tasks\nwhile acknowledging the fundamental connection between language and logic.", + "authors": "Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh", + "published": "2023-11-22", + "updated": "2023-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.03192v1", + "title": "Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers", + "abstract": "The integration of Large Language Models (LLMs) in information retrieval has\nraised a critical reevaluation of fairness in the text-ranking models. LLMs,\nsuch as GPT models and Llama2, have shown effectiveness in natural language\nunderstanding tasks, and prior works (e.g., RankGPT) have also demonstrated\nthat the LLMs exhibit better performance than the traditional ranking models in\nthe ranking task. However, their fairness remains largely unexplored. This\npaper presents an empirical study evaluating these LLMs using the TREC Fair\nRanking dataset, focusing on the representation of binary protected attributes\nsuch as gender and geographic location, which are historically underrepresented\nin search outcomes. Our analysis delves into how these LLMs handle queries and\ndocuments related to these attributes, aiming to uncover biases in their\nranking algorithms. We assess fairness from both user and content perspectives,\ncontributing an empirical benchmark for evaluating LLMs as the fair ranker.", + "authors": "Yuan Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.08656v1", + "title": "Linear Cross-document Event Coreference Resolution with X-AMR", + "abstract": "Event Coreference Resolution (ECR) as a pairwise mention classification task\nis expensive both for automated systems and manual annotations. The task's\nquadratic difficulty is exacerbated when using Large Language Models (LLMs),\nmaking prompt engineering for ECR prohibitively costly. In this work, we\npropose a graphical representation of events, X-AMR, anchored around individual\nmentions using a \\textbf{cross}-document version of \\textbf{A}bstract\n\\textbf{M}eaning \\textbf{R}epresentation. We then linearize the ECR with a\nnovel multi-hop coreference algorithm over the event graphs. The event graphs\nsimplify ECR, making it a) LLM cost-effective, b) compositional and\ninterpretable, and c) easily annotated. For a fair assessment, we first enrich\nan existing ECR benchmark dataset with these event graphs using an\nannotator-friendly tool we introduce. Then, we employ GPT-4, the newest LLM by\nOpenAI, for these annotations. Finally, using the ECR algorithm, we assess\nGPT-4 against humans and analyze its limitations. Through this research, we aim\nto advance the state-of-the-art for efficient ECR and shed light on the\npotential shortcomings of current LLMs at this task. Code and annotations:\n\\url{https://github.com/ahmeshaf/gpt_coref}", + "authors": "Shafiuddin Rehan Ahmed, George Arthur Baker, Evi Judge, Michael Regan, Kristin Wright-Bettner, Martha Palmer, James H. 
Martin", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2305.03514v3", + "title": "Can Large Language Models Transform Computational Social Science?", + "abstract": "Large Language Models (LLMs) are capable of successfully performing many\nlanguage processing tasks zero-shot (without training data). If zero-shot LLMs\ncan also reliably classify and explain social phenomena like persuasiveness and\npolitical ideology, then LLMs could augment the Computational Social Science\n(CSS) pipeline in important ways. This work provides a road map for using LLMs\nas CSS tools. Towards this end, we contribute a set of prompting best practices\nand an extensive evaluation pipeline to measure the zero-shot performance of 13\nlanguage models on 25 representative English CSS benchmarks. On taxonomic\nlabeling tasks (classification), LLMs fail to outperform the best fine-tuned\nmodels but still achieve fair levels of agreement with humans. On free-form\ncoding tasks (generation), LLMs produce explanations that often exceed the\nquality of crowdworkers' gold references. We conclude that the performance of\ntoday's LLMs can augment the CSS research pipeline in two ways: (1) serving as\nzero-shot data annotators on human annotation teams, and (2) bootstrapping\nchallenging creative generation tasks (e.g., explaining the underlying\nattributes of a text). In summary, LLMs are posed to meaningfully participate\nin social science analysis in partnership with humans.", + "authors": "Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang", + "published": "2023-04-12", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.14473v1", + "title": "The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs)", + "abstract": "With the introduction of ChatGPT, Large Language Models (LLMs) have received\nenormous attention in healthcare. Despite their potential benefits, researchers\nhave underscored various ethical implications. While individual instances have\ndrawn much attention, the debate lacks a systematic overview of practical\napplications currently researched and ethical issues connected to them. Against\nthis background, this work aims to map the ethical landscape surrounding the\ncurrent stage of deployment of LLMs in medicine and healthcare. Electronic\ndatabases and preprint servers were queried using a comprehensive search\nstrategy. Studies were screened and extracted following a modified rapid review\napproach. Methodological quality was assessed using a hybrid approach. For 53\nrecords, a meta-aggregative synthesis was performed. Four fields of\napplications emerged and testify to a vivid exploration phase. Advantages of\nusing LLMs are attributed to their capacity in data analysis, personalized\ninformation provisioning, support in decision-making, mitigating information\nloss and enhancing information accessibility. However, we also identifies\nrecurrent ethical concerns connected to fairness, bias, non-maleficence,\ntransparency, and privacy. A distinctive concern is the tendency to produce\nharmful misinformation or convincingly but inaccurate content. A recurrent plea\nfor ethical guidance and human oversight is evident. 
Given the variety of use\ncases, it is suggested that the ethical guidance debate be reframed to focus on\ndefining what constitutes acceptable human oversight across the spectrum of\napplications. This involves considering diverse settings, varying potentials\nfor harm, and different acceptable thresholds for performance and certainty in\nhealthcare. In addition, a critical inquiry is necessary to determine the\nextent to which the current experimental use of LLMs is necessary and\njustified.", + "authors": "Joschka Haltaufderheide, Robert Ranisch", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2307.11761v1", + "title": "Fairness of ChatGPT and the Role Of Explainable-Guided Prompts", + "abstract": "Our research investigates the potential of Large-scale Language Models\n(LLMs), specifically OpenAI's GPT, in credit risk assessment, a binary\nclassification task. Our findings suggest that LLMs, when directed by\njudiciously designed prompts and supplemented with domain-specific knowledge,\ncan parallel the performance of traditional Machine Learning (ML) models.\nIntriguingly, they achieve this with significantly less data (40 times less),\nutilizing merely 20 data points compared to the ML's 800. LLMs particularly\nexcel in minimizing false positives and enhancing fairness, both being vital\naspects of risk analysis. While our results did not surpass those of classical\nML models, they underscore the potential of LLMs in analogous tasks, laying a\ngroundwork for future explorations into harnessing the capabilities of LLMs in\ndiverse ML tasks.", + "authors": "Yashar Deldjoo", + "published": "2023-07-14", + "updated": "2023-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.15491v1", + "title": "Open Source Conversational LLMs do not know most Spanish words", + "abstract": "The growing interest in Large Language Models (LLMs) and in particular in\nconversational models with which users can interact has led to the development\nof a large number of open-source chat LLMs. These models are evaluated on a\nwide range of benchmarks to assess their capabilities in answering questions or\nsolving problems on almost any possible topic or to test their ability to\nreason or interpret texts. Instead, the evaluation of the knowledge that these\nmodels have of the languages has received much less attention: for example, the\nwords that they can recognize and use in different languages. In this paper, we\nevaluate the knowledge that open-source chat LLMs have of Spanish words by\ntesting a sample of words in a reference dictionary. The results show that\nopen-source chat LLMs produce incorrect meanings for an important fraction of\nthe words and are not able to use most of the words correctly to write\nsentences with context. 
These results show how Spanish is left behind in the\nopen-source LLM race and highlight the need to push for linguistic fairness in\nconversational LLMs ensuring that they provide similar performance across\nlanguages.", + "authors": "Javier Conde, Miguel Gonz\u00e1lez, Nina Melero, Raquel Ferrando, Gonzalo Mart\u00ednez, Elena Merino-G\u00f3mez, Jos\u00e9 Alberto Hern\u00e1ndez, Pedro Reviriego", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2206.13757v1", + "title": "Flexible text generation for counterfactual fairness probing", + "abstract": "A common approach for testing fairness issues in text-based classifiers is\nthrough the use of counterfactuals: does the classifier output change if a\nsensitive attribute in the input is changed? Existing counterfactual generation\nmethods typically rely on wordlists or templates, producing simple\ncounterfactuals that don't take into account grammar, context, or subtle\nsensitive attribute references, and could miss issues that the wordlist\ncreators had not considered. In this paper, we introduce a task for generating\ncounterfactuals that overcomes these shortcomings, and demonstrate how large\nlanguage models (LLMs) can be leveraged to make progress on this task. We show\nthat this LLM-based method can produce complex counterfactuals that existing\nmethods cannot, comparing the performance of various counterfactual generation\nmethods on the Civil Comments dataset and showing their value in evaluating a\ntoxicity classifier.", + "authors": "Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster", + "published": "2022-06-28", + "updated": "2022-06-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.02294v1", + "title": "LLMs grasp morality in concept", + "abstract": "Work in AI ethics and fairness has made much progress in regulating LLMs to\nreflect certain values, such as fairness, truth, and diversity. However, it has\ntaken the problem of how LLMs might 'mean' anything at all for granted. Without\naddressing this, it is not clear what imbuing LLMs with such values even means.\nIn response, we provide a general theory of meaning that extends beyond humans.\nWe use this theory to explicate the precise nature of LLMs as meaning-agents.\nWe suggest that the LLM, by virtue of its position as a meaning-agent, already\ngrasps the constructions of human society (e.g. morality, gender, and race) in\nconcept. Consequently, under certain ethical frameworks, currently popular\nmethods for model alignment are limited at best and counterproductive at worst.\nMoreover, unaligned models may help us better develop our moral and social\nphilosophy.", + "authors": "Mark Pock, Andre Ye, Jared Moore", + "published": "2023-11-04", + "updated": "2023-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.01964v1", + "title": "Don't Make Your LLM an Evaluation Benchmark Cheater", + "abstract": "Large language models~(LLMs) have greatly advanced the frontiers of\nartificial intelligence, attaining remarkable improvement in model capacity. 
To\nassess the model performance, a typical approach is to construct evaluation\nbenchmarks for measuring the ability level of LLMs in different aspects.\nAlthough a number of high-quality benchmarks have been released, concerns\nabout the appropriate use of these benchmarks and the fair comparison\nof different models are growing. Considering these concerns, in\nthis paper, we discuss the potential risk and impact of inappropriately using\nevaluation benchmarks and misleadingly interpreting the evaluation results.\nSpecifically, we focus on a special issue that would lead to inappropriate\nevaluation, \\ie \\emph{benchmark leakage}, meaning that the data related to\nevaluation sets is occasionally used for model training. This phenomenon has\nbecome more common since pre-training data is often prepared ahead of model\ntesting. We conduct extensive experiments to study the effect of benchmark\nleakage, and find that it can dramatically boost the evaluation results, which\nwould ultimately lead to an unreliable assessment of model performance. To improve\nthe use of existing evaluation benchmarks, we finally present several\nguidelines for both LLM developers and benchmark maintainers. We hope this work\ncan draw attention to appropriate training and evaluation of LLMs.", + "authors": "Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han", + "published": "2023-11-03", + "updated": "2023-11-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.08472v1", + "title": "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models", + "abstract": "Recently, work in NLP has shifted to few-shot (in-context) learning, with\nlarge language models (LLMs) performing well across a range of tasks. However,\nwhile fairness evaluations have become a standard for supervised methods,\nlittle is known about the fairness of LLMs as prediction systems. Further,\ncommon standard methods for fairness involve access to model weights or are\napplied during finetuning, which are not applicable in few-shot learning. Do\nLLMs exhibit prediction biases when used for standard NLP tasks? In this work,\nwe explore the effect of shots, which directly affect the performance of\nmodels, on the fairness of LLMs as NLP classification systems. We consider how\ndifferent shot selection strategies, both existing and new demographically\nsensitive methods, affect model fairness across three standard fairness\ndatasets. We discuss how future work can include LLM fairness evaluations.", + "authors": "Carlos Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.04892v2", + "title": "Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs", + "abstract": "Recent works have showcased the ability of LLMs to embody diverse personas in\ntheir responses, exemplified by prompts like 'You are Yoda. Explain the Theory\nof Relativity.' While this ability allows personalization of LLMs and enables\nhuman behavior simulation, its effect on LLMs' capabilities remains unclear. To\nfill this gap, we present the first extensive study of the unintended\nside-effects of persona assignment on the ability of LLMs to perform basic\nreasoning tasks. 
Our study covers 24 reasoning datasets, 4 LLMs, and 19 diverse\npersonas (e.g. an Asian person) spanning 5 socio-demographic groups. Our\nexperiments unveil that LLMs harbor deep-rooted bias against various\nsocio-demographics underneath a veneer of fairness. While they overtly reject\nstereotypes when explicitly asked ('Are Black people less skilled at\nmathematics?'), they manifest stereotypical and erroneous presumptions when\nasked to answer questions while adopting a persona. These can be observed as\nabstentions in responses, e.g., 'As a Black person, I can't answer this\nquestion as it requires math knowledge', and generally result in a substantial\nperformance drop. Our experiments with ChatGPT-3.5 show that this bias is\nubiquitous: 80% of our personas demonstrate bias; it is significant: some\ndatasets show performance drops of 70%+; and it can be especially harmful for\ncertain groups: some personas suffer statistically significant drops on 80%+\nof the datasets. Overall, all 4 LLMs exhibit this bias to varying extents, with\nGPT-4-Turbo showing the least but still a problematic amount of bias (evident\nin 42% of the personas). Further analysis shows that these persona-induced\nerrors can be hard to discern and hard to avoid. Our findings serve as a\ncautionary tale that the practice of assigning personas to LLMs, a trend on\nthe rise, can surface their deep-rooted biases and have unforeseeable and\ndetrimental side-effects.", + "authors": "Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot", + "published": "2023-11-08", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.15007v1", + "title": "Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models", + "abstract": "With large language models (LLMs) poised to become embedded in our daily\nlives, questions are starting to be raised about the dataset(s) they learned\nfrom. These questions range from potential bias or misinformation LLMs could\nretain from their training data to questions of copyright and fair use of\nhuman-generated text. However, while these questions emerge, developers of the\nrecent state-of-the-art LLMs become increasingly reluctant to disclose details\non their training corpus. We here introduce the task of document-level\nmembership inference for real-world LLMs, i.e. inferring whether the LLM has\nseen a given document during training or not. First, we propose a procedure for\nthe development and evaluation of document-level membership inference for LLMs\nby leveraging commonly used data sources for training and the model release\ndate. We then propose a practical, black-box method to predict document-level\nmembership and instantiate it on OpenLLaMA-7B with both books and academic\npapers. We show our methodology to perform very well, reaching an impressive\nAUC of 0.856 for books and 0.678 for papers. We then show our approach to\noutperform the sentence-level membership inference attacks used in the privacy\nliterature for the document-level membership task. 
We finally evaluate whether\nsmaller models might be less sensitive to document-level inference and show\nOpenLLaMA-3B to be approximately as sensitive as OpenLLaMA-7B to our approach.\nTaken together, our results show that accurate document-level membership can be\ninferred for LLMs, increasing the transparency of technology poised to change\nour lives.", + "authors": "Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye", + "published": "2023-10-23", + "updated": "2023-10-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CR", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.06500v1", + "title": "MetaAgents: Simulating Interactions of Human Behaviors for LLM-based Task-oriented Coordination via Collaborative Generative Agents", + "abstract": "Significant advancements have occurred in the application of Large Language\nModels (LLMs) for various tasks and social simulations. Despite this, their\ncapacities to coordinate within task-oriented social contexts are\nunder-explored. Such capabilities are crucial if LLMs are to effectively mimic\nhuman-like social behavior and produce meaningful results. To bridge this gap,\nwe introduce collaborative generative agents, endowing LLM-based Agents with\nconsistent behavior patterns and task-solving abilities. We situate these\nagents in a simulated job fair environment as a case study to scrutinize their\ncoordination skills. We propose a novel framework that equips collaborative\ngenerative agents with human-like reasoning abilities and specialized skills.\nOur evaluation demonstrates that these agents show promising performance.\nHowever, we also uncover limitations that hinder their effectiveness in more\ncomplex coordination tasks. Our work provides valuable insights into the role\nand evolution of LLMs in task-oriented social simulations.", + "authors": "Yuan Li, Yixuan Zhang, Lichao Sun", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.15478v1", + "title": "A Group Fairness Lens for Large Language Models", + "abstract": "The rapid advancement of large language models has revolutionized various\napplications but also raised crucial concerns about their potential to\nperpetuate biases and unfairness when deployed in social media contexts.\nEvaluating LLMs' potential biases and fairness has become crucial, as existing\nmethods rely on limited prompts focusing on just a few groups, lacking a\ncomprehensive categorical perspective. In this paper, we propose evaluating LLM\nbiases from a group fairness lens using a novel hierarchical schema\ncharacterizing diverse social groups. Specifically, we construct a dataset,\nGFair, encapsulating target-attribute combinations across multiple dimensions.\nIn addition, we introduce statement organization, a new open-ended text\ngeneration task, to uncover complex biases in LLMs. Extensive evaluations of\npopular LLMs reveal inherent safety concerns. 
To mitigate the biases of LLMs\nfrom a group fairness perspective, we pioneer a novel chain-of-thought method,\nGF-Think.\nExperimental results demonstrate its efficacy in mitigating bias in LLMs to\nachieve fairness.", + "authors": "Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiaodong He", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2308.11483v1", + "title": "Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable capabilities in\nvarious NLP tasks. However, previous works have shown these models are\nsensitive towards prompt wording, and few-shot demonstrations and their order,\nposing challenges to fair assessment of these models. As these models become\nmore powerful, it becomes imperative to understand and address these\nlimitations. In this paper, we focus on LLMs' robustness on the task of\nmultiple-choice questions -- a commonly adopted task to study reasoning and\nfact-retrieving capability of LLMs. Investigating the sensitivity of LLMs\ntowards the order of options in multiple-choice questions, we demonstrate a\nconsiderable performance gap of approximately 13% to 75% in LLMs on different\nbenchmarks, when answer options are reordered, even when using demonstrations\nin a few-shot setting. Through a detailed analysis, we conjecture that this\nsensitivity arises when LLMs are uncertain about the prediction between the\ntop-2/3 choices, and specific option placements may favor certain predictions\namong those top choices, depending on the question, due to positional bias.\nWe also identify patterns in top-2 choices that amplify or mitigate the model's\nbias toward option placement. We found that for amplifying bias, the optimal\nstrategy involves positioning the top two choices as the first and last\noptions. Conversely, to mitigate bias, we recommend placing these choices among\nthe adjacent options. To validate our conjecture, we conduct various\nexperiments and adopt two approaches to calibrate LLMs' predictions, leading to\nup to 8 percentage points improvement across different models and benchmarks.", + "authors": "Pouya Pezeshkpour, Estevam Hruschka", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.06056v1", + "title": "METAL: Metamorphic Testing Framework for Analyzing Large-Language Model Qualities", + "abstract": "Large-Language Models (LLMs) have shifted the paradigm of natural language\ndata processing. However, their black-box and probabilistic characteristics\ncan lead to potential risks in the quality of outputs in diverse LLM\napplications. Recent studies have tested Quality Attributes (QAs), such as\nrobustness or fairness, of LLMs by generating adversarial input texts. However,\nexisting studies have limited their coverage of QAs and tasks in LLMs and are\ndifficult to extend. Additionally, these studies have only used one evaluation\nmetric, Attack Success Rate (ASR), to assess the effectiveness of their\napproaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)\nframework to address these issues by applying Metamorphic Testing (MT)\ntechniques. 
This approach facilitates the systematic testing of LLM qualities\nby defining Metamorphic Relations (MRs), which serve as modularized evaluation\nmetrics. The METAL framework can automatically generate hundreds of MRs from\ntemplates that cover various QAs and tasks. In addition, we introduced novel\nmetrics that integrate the ASR method into the semantic qualities of text to\nassess the effectiveness of MRs accurately. Through the experiments conducted\nwith three prominent LLMs, we have confirmed that the METAL framework\neffectively evaluates essential QAs on primary LLM tasks and reveals the\nquality risks in LLMs. Moreover, the newly proposed metrics can guide the\noptimal MRs for testing each task and suggest the most effective method for\ngenerating MRs.", + "authors": "Sangwon Hyun, Mingyu Guo, M. Ali Babar", + "published": "2023-12-11", + "updated": "2023-12-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2303.01248v3", + "title": "Can ChatGPT Assess Human Personalities? A General Evaluation Framework", + "abstract": "Large Language Models (LLMs) especially ChatGPT have produced impressive\nresults in various areas, but their potential human-like psychology is still\nlargely unexplored. Existing works study the virtual personalities of LLMs but\nrarely explore the possibility of analyzing human personalities via LLMs. This\npaper presents a generic evaluation framework for LLMs to assess human\npersonalities based on Myers Briggs Type Indicator (MBTI) tests. Specifically,\nwe first devise unbiased prompts by randomly permuting options in MBTI\nquestions and adopt the average testing result to encourage more impartial\nanswer generation. Then, we propose to replace the subject in question\nstatements to enable flexible queries and assessments on different subjects\nfrom LLMs. Finally, we re-formulate the question instructions in a manner of\ncorrectness evaluation to facilitate LLMs to generate clearer responses. The\nproposed framework enables LLMs to flexibly assess personalities of different\ngroups of people. We further propose three evaluation metrics to measure the\nconsistency, robustness, and fairness of assessment results from\nstate-of-the-art LLMs including ChatGPT and GPT-4. Our experiments reveal\nChatGPT's ability to assess human personalities, and the average results\ndemonstrate that it can achieve more consistent and fairer assessments in spite\nof lower robustness against prompt biases compared with InstructGPT.", + "authors": "Haocong Rao, Cyril Leung, Chunyan Miao", + "published": "2023-03-01", + "updated": "2023-10-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2401.04057v1", + "title": "Unveiling Bias in Fairness Evaluations of Large Language Models: A Critical Literature Review of Music and Movie Recommendation Systems", + "abstract": "The rise of generative artificial intelligence, particularly Large Language\nModels (LLMs), has intensified the imperative to scrutinize fairness alongside\naccuracy. Recent studies have begun to investigate fairness evaluations for\nLLMs within domains such as recommendations. Given that personalization is an\nintrinsic aspect of recommendation systems, its incorporation into fairness\nassessments is paramount. Yet, the degree to which current fairness evaluation\nframeworks account for personalization remains unclear. 
Our comprehensive\nliterature review aims to fill this gap by examining how existing frameworks\nhandle fairness evaluations of LLMs, with a focus on the integration of\npersonalization factors. Despite an exhaustive collection and analysis of\nrelevant works, we discovered that most evaluations overlook personalization, a\ncritical facet of recommendation systems, thereby inadvertently perpetuating\nunfair practices. Our findings shed light on this oversight and underscore the\nurgent need for more nuanced fairness evaluations that acknowledge\npersonalization. Such improvements are vital for fostering equitable\ndevelopment within the AI community.", + "authors": "Chandan Kumar Sah, Dr. Lian Xiaoli, Muhammad Mirajul Islam", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.SE" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.06852v2", + "title": "ChemLLM: A Chemical Large Language Model", + "abstract": "Large language models (LLMs) have made impressive progress in chemistry\napplications. However, the community lacks an LLM specifically designed for\nchemistry. The main challenges are two-fold: firstly, most chemical data and\nscientific knowledge are stored in structured databases, which limits the\nmodel's ability to sustain coherent dialogue when used directly. Secondly,\nthere is an absence of an objective and fair benchmark that encompasses most\nchemistry tasks. Here, we introduce ChemLLM, a comprehensive framework that\nfeatures the first LLM dedicated to chemistry. It also includes ChemData, a\ndataset specifically designed for instruction tuning, and ChemBench, a robust\nbenchmark covering nine essential chemistry tasks. ChemLLM is adept at\nperforming various tasks across chemical disciplines with fluid dialogue\ninteraction. Notably, ChemLLM achieves results comparable to GPT-4 on the core\nchemical tasks and demonstrates competitive performance with LLMs of similar\nsize in general scenarios. ChemLLM paves a new path for exploration in chemical\nstudies, and our method of incorporating structured chemical knowledge into\ndialogue systems sets a new standard for developing LLMs in various scientific\nfields. Codes, Datasets, and Model weights are publicly accessible at\nhttps://hf.co/AI4Chem", + "authors": "Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-Sen Zhong, Yuqiang Li", + "published": "2024-02-10", + "updated": "2024-04-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2312.14804v1", + "title": "Use large language models to promote equity", + "abstract": "Advances in large language models (LLMs) have driven an explosion of interest\nabout their societal impacts. Much of the discourse around how they will impact\nsocial equity has been cautionary or negative, focusing on questions like \"how\nmight LLMs be biased and how would we mitigate those biases?\" This is a vital\ndiscussion: the ways in which AI generally, and LLMs specifically, can entrench\nbiases have been well-documented. But equally vital, and much less discussed,\nis the more opportunity-focused counterpoint: \"what promising applications do\nLLMs enable that could promote equity?\" If LLMs are to enable a more equitable\nworld, it is not enough just to play defense against their biases and failure\nmodes. 
We must also go on offense, applying them positively to equity-enhancing\nuse cases to increase opportunities for underserved groups and reduce societal\ndiscrimination. There are many choices which determine the impact of AI, and a\nfundamental choice very early in the pipeline is the problems we choose to\napply it to. If we focus only later in the pipeline -- making LLMs marginally\nmore fair as they facilitate use cases which intrinsically entrench power -- we\nwill miss an important opportunity to guide them to equitable impacts. Here, we\nhighlight the emerging potential of LLMs to promote equity by presenting four\nnewly possible, promising research directions, while keeping risks and\ncautionary points in clear view.", + "authors": "Emma Pierson, Divya Shanmugam, Rajiv Movva, Jon Kleinberg, Monica Agrawal, Mark Dredze, Kadija Ferryman, Judy Wawira Gichoya, Dan Jurafsky, Pang Wei Koh, Karen Levy, Sendhil Mullainathan, Ziad Obermeyer, Harini Suresh, Keyon Vafa", + "published": "2023-12-22", + "updated": "2023-12-22", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2310.18333v3", + "title": "She had Cobalt Blue Eyes: Prompt Testing to Create Aligned and Sustainable Language Models", + "abstract": "As the use of large language models (LLMs) increases within society, so does\nthe risk of their misuse. Appropriate safeguards must be in place to ensure LLM\noutputs uphold the ethical standards of society, highlighting the positive role\nthat artificial intelligence technologies can have. Recent events indicate\nethical concerns around conventionally trained LLMs, leading to overall unsafe\nuser experiences. This motivates our research question: how do we ensure LLM\nalignment? In this work, we introduce a test suite of unique prompts to foster\nthe development of aligned LLMs that are fair, safe, and robust. We show that\nprompting LLMs at every step of the development pipeline, including data\ncuration, pre-training, and fine-tuning, will result in an overall more\nresponsible model. Our test suite evaluates outputs from four state-of-the-art\nlanguage models: GPT-3.5, GPT-4, OPT, and LLaMA-2. The assessment presented in\nthis paper highlights a gap between societal alignment and the capabilities of\ncurrent LLMs. Additionally, implementing a test suite such as ours lowers the\nenvironmental overhead of making models safe and fair.", + "authors": "Veronica Chatrath, Oluwanifemi Bamgbose, Shaina Raza", + "published": "2023-10-20", + "updated": "2023-12-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2311.18140v1", + "title": "ROBBIE: Robust Bias Evaluation of Large Generative Language Models", + "abstract": "As generative large language models (LLMs) grow more performant and\nprevalent, we must develop comprehensive enough tools to measure and improve\ntheir fairness. Different prompt-based datasets can be used to measure social\nbias across multiple text domains and demographic axes, meaning that testing\nLLMs on more datasets can potentially help us characterize their biases more\nfully, and better ensure equal and equitable treatment of marginalized\ndemographic groups. In this work, our focus is two-fold:\n (1) Benchmarking: a comparison of 6 different prompt-based bias and toxicity\nmetrics across 12 demographic axes and 5 families of generative LLMs. 
Out of\nthose 6 metrics, AdvPromptSet and HolisticBiasR are novel datasets proposed in\nthe paper. The comparison of those benchmarks gives us insights about the bias\nand toxicity of the compared models. Therefore, we explore the frequency of\ndemographic terms in common LLM pre-training corpora and how this may relate to\nmodel biases.\n (2) Mitigation: we conduct a comprehensive study of how well 3 bias/toxicity\nmitigation techniques perform across our suite of measurements. ROBBIE aims to\nprovide insights for practitioners while deploying a model, emphasizing the\nneed to not only measure potential harms, but also understand how they arise by\ncharacterizing the data, mitigate harms once found, and balance any trade-offs.\nWe open-source our analysis code in hopes of encouraging broader measurements\nof bias in future LLMs.", + "authors": "David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane Dwivedi-Yu, Eleonora Presani, Adina Williams, Eric Michael Smith", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2402.04489v1", + "title": "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning", + "abstract": "Fairness and privacy are two important values machine learning (ML)\npractitioners often seek to operationalize in models. Fairness aims to reduce\nmodel bias for social/demographic sub-groups. Privacy via differential privacy\n(DP) mechanisms, on the other hand, limits the impact of any individual's\ntraining data on the resulting model. The trade-offs between privacy and\nfairness goals of trustworthy ML pose a challenge to those wishing to address\nboth. We show that DP amplifies gender, racial, and religious bias when\nfine-tuning large language models (LLMs), producing models more biased than\nones fine-tuned without DP. We find the cause of the amplification to be a\ndisparity in convergence of gradients across sub-groups. Through the case of\nbinary gender bias, we demonstrate that Counterfactual Data Augmentation (CDA),\na known method for addressing bias, also mitigates bias amplification by DP. As\na consequence, DP and CDA together can be used to fine-tune models while\nmaintaining both fairness and privacy.", + "authors": "Sanjari Srivastava, Piotr Mardziel, Zhikhun Zhang, Archana Ahlawat, Anupam Datta, John C Mitchell", + "published": "2024-02-07", + "updated": "2024-02-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CY", + "stat.ME" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.00884v2", + "title": "Text classification of column headers with a controlled vocabulary: leveraging LLMs for metadata enrichment", + "abstract": "Traditional dataset retrieval systems index on metadata information rather\nthan on the data values. They thus rely primarily on manual annotations and\nhigh-quality metadata, processes known to be labour-intensive and challenging\nto automate. We propose a method to support metadata enrichment with topic\nannotations of column headers using three Large Language Models (LLMs):\nChatGPT-3.5, GoogleBard and GoogleGemini. We investigate the LLMs' ability to\nclassify column headers based on domain-specific topics from a controlled\nvocabulary. We evaluate our approach by assessing the internal consistency of\nthe LLMs, the inter-machine alignment, and the human-machine agreement for the\ntopic classification task. 
Additionally, we investigate the impact of\ncontextual information (i.e. dataset description) on the classification\noutcomes. Our results suggest that ChatGPT and GoogleGemini outperform\nGoogleBard for internal consistency as well as LLM-human alignment.\nInterestingly, we found that context had no impact on the LLMs' performance.\nThis work proposes a novel approach that leverages LLMs for text classification\nusing a controlled topic vocabulary, which has the potential to facilitate\nautomated metadata enrichment, thereby enhancing dataset retrieval and the\nFindability, Accessibility, Interoperability and Reusability (FAIR) of research\ndata on the Web.", + "authors": "Margherita Martorana, Tobias Kuhn, Lise Stork, Jacco van Ossenbruggen", + "published": "2024-03-01", + "updated": "2024-03-05", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "cs.AI", + "cs.IR" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2404.07981v1", + "title": "Manipulating Large Language Models to Increase Product Visibility", + "abstract": "Large language models (LLMs) are increasingly being integrated into search\nengines to provide natural language responses tailored to user queries.\nCustomers and end-users are also becoming more dependent on these models for\nquick and easy purchase decisions. In this work, we investigate whether\nrecommendations from LLMs can be manipulated to enhance a product's visibility.\nWe demonstrate that adding a strategic text sequence (STS) -- a carefully\ncrafted message -- to a product's information page can significantly increase\nits likelihood of being listed as the LLM's top recommendation. To understand\nthe impact of STS, we use a catalog of fictitious coffee machines and analyze\nits effect on two target products: one that seldom appears in the LLM's\nrecommendations and another that usually ranks second. We observe that the\nstrategic text sequence significantly enhances the visibility of both products\nby increasing their chances of appearing as the top recommendation. This\nability to manipulate LLM-generated search responses provides vendors with a\nconsiderable competitive advantage and has the potential to disrupt fair market\ncompetition. Just as search engine optimization (SEO) revolutionized how\nwebpages are customized to rank higher in search engine results, influencing\nLLM recommendations could profoundly impact content optimization for AI-driven\nsearch services. Code for our experiments is available at\nhttps://github.com/aounon/llm-rank-optimizer.", + "authors": "Aounon Kumar, Himabindu Lakkaraju", + "published": "2024-04-11", + "updated": "2024-04-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "LLM Fairness" + }, + { + "url": "http://arxiv.org/abs/2403.05668v1", + "title": "CFaiRLLM: Consumer Fairness Evaluation in Large-Language Model Recommender System", + "abstract": "In the evolving landscape of recommender systems, the integration of Large\nLanguage Models (LLMs) such as ChatGPT marks a new era, introducing the concept\nof Recommendation via LLM (RecLLM). While these advancements promise\nunprecedented personalization and efficiency, they also bring to the fore\ncritical concerns regarding fairness, particularly in how recommendations might\ninadvertently perpetuate or amplify biases associated with sensitive user\nattributes. 
In order to address these concerns, our study introduces a\ncomprehensive evaluation framework, CFaiRLLM, aimed at evaluating (and thereby\nmitigating) biases on the consumer side within RecLLMs.\n Our research methodically assesses the fairness of RecLLMs by examining how\nrecommendations might vary with the inclusion of sensitive attributes such as\ngender, age, and their intersections, through both similarity alignment and\ntrue preference alignment. By analyzing recommendations generated under\ndifferent conditions-including the use of sensitive attributes in user\nprompts-our framework identifies potential biases in the recommendations\nprovided. A key part of our study involves exploring how different detailed\nstrategies for constructing user profiles (random, top-rated, recent) impact\nthe alignment between recommendations made without consideration of sensitive\nattributes and those that are sensitive-attribute-aware, highlighting the bias\nmechanisms within RecLLMs.\n The findings in our study highlight notable disparities in the fairness of\nrecommendations, particularly when sensitive attributes are integrated into the\nrecommendation process, either individually or in combination. The analysis\ndemonstrates that the choice of user profile sampling strategy plays a\nsignificant role in affecting fairness outcomes, highlighting the complexity of\nachieving fair recommendations in the era of LLMs.", + "authors": "Yashar Deldjoo, Tommaso di Noia", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "LLM Fairness" + } +] \ No newline at end of file