diff --git "a/related_34K/test_related_short_2405.00722v1.json" "b/related_34K/test_related_short_2405.00722v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2405.00722v1.json" @@ -0,0 +1,1431 @@ +[ + { + "url": "http://arxiv.org/abs/2405.00722v1", + "title": "LLMs for Generating and Evaluating Counterfactuals: A Comprehensive Study", + "abstract": "As NLP models become more complex, understanding their decisions becomes more\ncrucial. Counterfactuals (CFs), where minimal changes to inputs flip a model's\nprediction, offer a way to explain these models. While Large Language Models\n(LLMs) have shown remarkable performance in NLP tasks, their efficacy in\ngenerating high-quality CFs remains uncertain. This work fills this gap by\ninvestigating how well LLMs generate CFs for two NLU tasks. We conduct a\ncomprehensive comparison of several common LLMs, and evaluate their CFs,\nassessing both intrinsic metrics, and the impact of these CFs on data\naugmentation. Moreover, we analyze differences between human and LLM-generated\nCFs, providing insights for future research directions. Our results show that\nLLMs generate fluent CFs, but struggle to keep the induced changes minimal.\nGenerating CFs for Sentiment Analysis (SA) is less challenging than NLI where\nLLMs show weaknesses in generating CFs that flip the original label. This also\nreflects on the data augmentation performance, where we observe a large gap\nbetween augmenting with human and LLMs CFs. Furthermore, we evaluate LLMs'\nability to assess CFs in a mislabelled data setting, and show that they have a\nstrong bias towards agreeing with the provided labels. GPT4 is more robust\nagainst this bias and its scores correlate well with automatic metrics. Our\nfindings reveal several limitations and point to potential future work\ndirections.", + "authors": "Van Bach Nguyen, Paul Youssef, J\u00f6rg Schl\u00f6tterer, Christin Seifert", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Large Language Models. LLMs have demonstrated impressive capabilities across a diverse natural language processing tasks, such as question LLM FL UA RS Avg. SA Human Crowd 3.66 2.95 2.58 3.06 Human Expert 3.54 2.69 2.49 2.91 GPT3.5 3.58 2.91 2.65 3.05 GPT4 3.79 3.15 2.91 3.28 LLAMA2 7B 3.60 2.74 2.63 2.99 LLAMA2 70B 3.70 2.75 2.47 2.97 Mistral 7B 3.85 2.84 2.69 3.13 Mistral 56B 2.58 1.74 1.75 2.02 hypothesis Human Crowd 3.81 3.96 3.81 3.86 GPT3.5 3.19 3.93 3.74 3.62 GPT4 3.96 3.98 3.92 3.95 LLAMA2 7B 3.23 3.74 3.66 3.54 LLAMA2 70B 3.49 3.68 3.56 3.58 Mistral 7B 3.50 3.70 3.65 3.62 Mistral 56B 3.02 3.45 3.48 3.32 premise Human Crowd 3.58 3.88 3.86 3.77 GPT3.5 2.51 3.82 3.69 3.34 GPT4 3.68 3.83 3.84 3.78 LLAMA2 7B 2.96 3.38 3.67 3.34 LLAMA2 70B 3.35 3.46 3.67 3.49 Mistral 7B 2.97 3.63 3.74 3.45 Mistral 56B 2.37 3.11 3.49 2.99 Table 8: Scores for evaluation with GPT4. FL refers to flipping label score, UA to unncessary alteration, RS is the realisticness score, and Avg. is the average of the three scores. Best score for each task is in bold. Second best score is underlined. answering, wherein the model needs to retrieve relevant information from its training data and generate a concise response, or text summarization, which distills lengthy texts into concise summaries while retaining crucial information (Maynez et al., 2023). 
However, the task of CFs generation has not been comprehensively evaluated for LLMs. A large number of LLMs exist, exhibiting variations in model size, architecture, training dataset, the incorporation of human feedback loops, and accessibility (open-source or proprietary) (Zhao et al., 2023). Consequently, there is a need for comparative evaluations of different models on a standardized task. Since the architectures of the LLMs under consideration are predominantly similar, and the training datasets are either known public sources or undisclosed, the primary focus of this study is to compare LLMs that differ in model size, the use of human feedback, and accessibility.
Compared Values | SA | NLI
FL & FR | 0.83 | 0.92
UA & -TS | 0.50 | 0.75
RS & -PPL | 0.62 | -0.23
Table 9: Spearman correlations between intrinsic metrics and GPT-4 evaluation scores. PPL and TS scores are negated so that higher is better.
LLM/Score | 1 | 2 | 3 | 4
GPT3.5 | 0.70 | 3.85 | 69.61 | 25.84
GPT4 | 55.50 | 3.19 | 2.94 | 38.37
Table 10: Flip label score distributions on the corrupted set of NLI. Distribution is an average of the distributions on the premise and hypothesis sets.
To enhance the performance of LLMs across various tasks, in-context learning (ICL) techniques have been employed to optimize the prompts provided to these models. Numerous prompt engineering approaches for the inference phase have been proposed, either selecting the demonstration instances or formatting the prompt in the form of instructions or reasoning steps (Dong et al., 2022). In this study, we leverage chain-of-thought (CoT) prompting (Wei et al., 2022) and a closest-instance retrieval strategy for demonstration selection (Liu et al., 2022) to optimize the generation process.
CFs generation methods. Several methods exist for generating CFs, but most of them are designed for a specific LLM. The CFs generated by MICE (Ross et al., 2021) are intended for debugging models, not for data augmentation. Polyjuice (Wu et al., 2021) requires specifying the type of edits that should be conducted, and the resulting CFs have to be labeled manually (Robeer et al., 2021). DISCO (Chen et al., 2023) uses GPT3\u2019s fill-in-the-blanks mode, which is unavailable in most open-source LLMs and would require adapting them. CREST (Treviso et al., 2023) depends on a rationalizer module, and its editor module is a masked LM that needs to be further trained. Instead, we prompt LLMs to generate CFs by providing instructions and an example. We provide more details in Section 3.2.
LLMs for CFs generation. Li et al. (2024) investigated the strengths and weaknesses of LLMs as CFs generators. Additionally, they disclosed the factors that impact LLMs during CFs generation, including both intrinsic properties of LLMs and prompt design considerations. However, their study lacks an intrinsic evaluation of CFs and omits a comparison with human-generated CFs. Sachdeva et al. (2024) leverage LLMs to generate CFs for extractive question answering, showing that data augmentation with CFs improves OOD performance, and that this improvement correlates with the diversity of the generated CFs. Prior work by Bhattacharjee et al. (2024) investigated the capability of GPT models in generating CFs for explanations by optimizing their prompts. However, their analysis was limited to the GPT family and did not consider downstream tasks or comparison with human-generated CFs.
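The following is a minimal sketch of the prompting setup described above: an instruction plus a retrieved demonstration, with chain-of-thought style guidance, used to ask an LLM for a counterfactual. The sentence-transformers encoder, the demonstration pool, and the prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch: pick the most similar labelled example as a demonstration
# (closest-instance retrieval in the spirit of Liu et al., 2022) and build a
# chain-of-thought prompt that asks an LLM to generate a counterfactual.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical pool of (text, label, human_counterfactual) demonstrations.
demo_pool = [
    ("The plot was dull and predictable.", "Negative",
     "The plot was gripping and full of surprises."),
    ("A heartwarming story with great acting.", "Positive",
     "A hollow story with wooden acting."),
]

def closest_demo(query: str):
    """Return the demonstration whose text is most similar to the query."""
    query_emb = encoder.encode(query, convert_to_tensor=True)
    demo_embs = encoder.encode([d[0] for d in demo_pool], convert_to_tensor=True)
    best = util.cos_sim(query_emb, demo_embs).argmax().item()
    return demo_pool[best]

def build_cf_prompt(text: str, source_label: str, target_label: str) -> str:
    demo_text, demo_label, demo_cf = closest_demo(text)
    return (
        "Task: make minimal edits to a text so that its label flips.\n\n"
        f"Example ({demo_label} text and its counterfactual):\n"
        f"Text: {demo_text}\nCounterfactual: {demo_cf}\n\n"
        f"Now edit the following {source_label} text so it becomes {target_label}. "
        "Think step by step: first identify the words that carry the label, "
        "then change only those words, then output the edited text.\n"
        f"Text: {text}\nCounterfactual:"
    )

print(build_cf_prompt("I'm still amazed by this film.", "Positive", "Negative"))
```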
In this work, we conduct a more comprehensive evaluation of LLMs on multiple aspects, including intrinsic metrics of CF explanation quality and performance on downstream tasks. Furthermore, we compare the LLM-generated CFs against those produced by humans, and propose a novel approach to evaluate CFs using LLMs.",
    "pre_questions": [],
    "main_content": "Introduction
The growing popularity of artificial intelligence (AI) and increasingly complex \u201cblack-box\u201d models have triggered a critical need for interpretability. As Miller (2019) highlights, explanations often seek to understand why an event P occurred instead of an alternative Q. Ideally, explanations should demonstrate how minimal changes to an instance could have led to different outcomes. In the context of textual data, this translates to introducing minimal modifications to the text through word additions, replacements, or deletions, to flip the label assigned by a given classifier. Counterfactual generation in NLP aims to foster an understanding of models, thereby facilitating their improvement (Kaushik et al., 2020), debugging (Ross et al., 2021), or rectification (Balashankar et al., 2023).
Figure 1: Counterfactual for Sentiment Analysis from several LLMs with their evaluation scores from GPT4.
Original (Negative): If you haven't seen this, it's terrible. It is pure trash. I saw this about 17 years ago, and I'm still screwed up from it.
LLM-generated counterfactuals (Positive), with GPT4 scores of 3.3, 3.0, 3.4, and 3.5:
- If you haven't seen this, it's incredible. It is pure gold. I saw this about 17 years ago, and I'm still hype about it.
- If you haven't seen this, it's amazing. It is incredible. I saw this about 17 years ago, and I'm still amazed from it.
- If you haven't seen this, it's terrible. It is pure beauty. I saw this about 17 years ago, and I'm still impressed from it.
- If you haven't seen this, it's amazing. It is a hidden gem. I saw this about 17 years ago, and I'm still enlightened from it.
In the field of NLP, LLMs have consistently demonstrated remarkable performance across diverse tasks. However, despite significant advancements in counterfactual generation methods, the efficacy of LLMs in producing high-quality CFs remains an open question. Our study bridges this gap by rigorously assessing the inherent capability of LLMs to generate CFs and identifying the most effective ones. We conduct a comprehensive comparison of several common LLMs, spanning different sizes and accessibility levels, evaluating their performance specifically on the counterfactual generation task. Our assessment encompasses standard metrics for counterfactual quality, as well as an in-depth evaluation of language fluency tailored to the context of counterfactual generation. Furthermore, we extend our analysis to data augmentation. We consider generating CFs for two NLU tasks in this study: Sentiment Analysis (SA) and Natural Language Inference (NLI). Our analysis demonstrates that LLMs are able to generate fluent text, but they have difficulty keeping the induced changes minimal. Generating CFs for SA is less challenging than NLI, where LLMs exhibit weaknesses in generating CFs that flip the labels. For data augmentation, CFs from LLMs can be an alternative to human CFs, as they are able to achieve similar performance, while on NLI further improvements are needed.
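As a concrete illustration of this definition, the minimal sketch below checks whether an edited text flips the label assigned by a fixed classifier f; the classifier checkpoint and the example texts are assumptions, not the paper's setup.

```python
# Minimal sketch: a counterfactual is a small edit that flips the label
# assigned by a fixed classifier f. Checkpoint and texts are illustrative.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

original = "If you haven't seen this, it's terrible. It is pure trash."
edited = "If you haven't seen this, it's incredible. It is pure gold."

orig_label = clf(original)[0]["label"]  # e.g. NEGATIVE
new_label = clf(edited)[0]["label"]     # e.g. POSITIVE

print(f"original: {orig_label}, edited: {new_label}, "
      f"flipped: {orig_label != new_label}")
```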
Furthermore, we show a positive correlation between keeping changes minimal and data augmentation performance. This suggests a new direction for generating improved augmentation data, potentially leading to more efficient augmentation approaches. We further assess the ability of LLMs to act as evaluators of CFs. By conducting controlled experiments, we show that LLMs have a strong bias towards agreeing with the given labels, even if these are incorrect. Additionally, we show the alignment between GPT4-based evaluation and intrinsic metrics for CFs, indicating that GPT-4 is a reliable evaluator for counterfactual generation and a suitable choice of LLM for assessing the quality of generated CFs. A sample of CFs from different LLMs with the corresponding scores is shown in Figure 1. Finally, to facilitate further research, we contribute a new dataset of CFs generated by various LLMs.
2 Evaluation Methodology
We conduct a multi-faceted evaluation, considering several use cases where CFs could be beneficial.
2.1 Intrinsic Evaluation
Given a fixed classifier f and a dataset with N samples (x_1, x_2, ..., x_N), x = (z_1, z_2, ..., z_n) represents a sequence of n tokens with a ground truth label y. A valid counterfactual x' should: (1) achieve the desired target label y', with (2) minimal changes, and (3) align with likely feature distributions (Molnar, 2022). Therefore, in this evaluation, we consider the intrinsic properties Flip Rate, Textual Similarity, and Perplexity, which correspond to these criteria, respectively.
Flip Rate (FR): measures how effectively a method changes the labels of instances with respect to a pretrained classifier. FR is defined as the percentage of generated instances whose labels are flipped over the total number of instances N (Bhattacharjee et al., 2024):
FR = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}[f(x'_i) = y']
Textual Similarity (TS): quantifies the closeness between an original instance and the counterfactual; lower distances indicate greater similarity. We use the Levenshtein distance as d to quantify the token-level distance between the original instance x and the counterfactual x'. This choice is motivated by the Levenshtein distance's ability to capture all types of edits (insertions, deletions, or substitutions) and by its widespread use in related work (Ross et al., 2021; Treviso et al., 2023):
TS = \frac{1}{N} \sum_{i=1}^{N} \frac{d(x_i, x'_i)}{|x_i|}
Perplexity (PPL): To ensure that the generated text is plausible, realistic, and follows a natural text distribution, we leverage perplexity from GPT-2 (Radford et al., 2019) because of its effectiveness in capturing such distributions:
PPL(x) = \exp\left(-\frac{1}{n} \sum_{i=1}^{n} \log p_\theta(z_i \mid z_{<i})\right)
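The three metrics above can be computed with standard libraries. The sketch below is illustrative: it assumes a DistilBERT sentiment classifier as the fixed classifier f, whitespace tokenization for the token-level Levenshtein distance, and GPT-2 for perplexity; it is not the paper's implementation.

```python
# Compact sketch of the three intrinsic metrics defined above (FR, TS, PPL).
# Checkpoints, whitespace tokenization and the DP edit distance are assumptions.
import math
import torch
from transformers import pipeline, GPT2LMHeadModel, GPT2TokenizerFast

clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def levenshtein(a, b):
    """Token-level edit distance (insertions, deletions, substitutions)."""
    dp = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ta != tb))
    return dp[-1]

def flip_rate(cfs, target_labels):
    """FR: share of counterfactuals classified as their target label."""
    preds = [p["label"] for p in clf(cfs)]
    return sum(p == y for p, y in zip(preds, target_labels)) / len(cfs)

def textual_similarity(originals, cfs):
    """TS: mean token-level Levenshtein distance, normalised by original length."""
    return sum(levenshtein(x.split(), x_cf.split()) / len(x.split())
               for x, x_cf in zip(originals, cfs)) / len(originals)

@torch.no_grad()
def perplexity(text):
    """PPL under GPT-2: exp of the mean negative log-likelihood per token."""
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    loss = gpt2(ids, labels=ids).loss  # mean NLL over tokens
    return math.exp(loss.item())

originals = ["If you haven't seen this, it's terrible. It is pure trash."]
cfs = ["If you haven't seen this, it's incredible. It is pure gold."]
print("FR :", flip_rate(cfs, ["POSITIVE"]))
print("TS :", textual_similarity(originals, cfs))
print("PPL:", perplexity(cfs[0]))
```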