diff --git "a/related_34K/test_related_short_2404.17475v1.json" "b/related_34K/test_related_short_2404.17475v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.17475v1.json" @@ -0,0 +1,1433 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17475v1", + "title": "CEval: A Benchmark for Evaluating Counterfactual Text Generation", + "abstract": "Counterfactual text generation aims to minimally change a text, such that it\nis classified differently. Judging advancements in method development for\ncounterfactual text generation is hindered by a non-uniform usage of data sets\nand metrics in related work. We propose CEval, a benchmark for comparing\ncounterfactual text generation methods. CEval unifies counterfactual and text\nquality metrics, includes common counterfactual datasets with human\nannotations, standard baselines (MICE, GDBA, CREST) and the open-source\nlanguage model LLAMA-2. Our experiments found no perfect method for generating\ncounterfactual text. Methods that excel at counterfactual metrics often produce\nlower-quality text while LLMs with simple prompts generate high-quality text\nbut struggle with counterfactual criteria. By making CEval available as an\nopen-source Python library, we encourage the community to contribute more\nmethods and maintain consistent evaluation in future work.", + "authors": "Van Bach Nguyen, J\u00f6rg Schl\u00f6tterer, Christin Seifert", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "While terms like \u201ccounterfactual\u201d and \u201ccontrastive\u201d generation are often used interchangeably in literature (Stepin et al., 2021), our work adopts a specific definition. We define counterfactual generation as generating a new instance with different model predictions from the original with minimal changes, encompassing both true counterfactuals and contrastives. This broader category includes counterfactual, contrastive generation, and adversarial attacks. In the past, adversarial attacks focused on changing the label without considering text quality. Recent work like GBDA (Guo et al., 2021) focuses on producing adverserial text that is more natural by adding fluency and semantic similarity losses. Hence, we include GBDA in our benchmark. Technically, counterfactual generation methods for text fall into three categories: Masking and Filling Methods (MF): These methods perform 2 steps: (1) identify important words for masking and use various techniques such as selecting words with the highest gradient or training a separate rationalizer for the masking process. (2) The masked words are then replaced using a pretrained language model with fill-in-blank ability. In step (1), MICE (Ross et al., 2021) and AutoCAD (Wen et al., 2022) use the gradient of the classifier. DoCoGen (Calderon et al., 2022) masks all relevant terms in a domain, while CREST (Treviso et al., 2023) trains a separate rationalizer, i.e., SPECTRA (Guerreiro and Martins, 2021). Then, all of them fine-tune T5 to fill in blanks but train in different ways. Polyjuice (Wu et al., 2021) accepts text with masking and fine-tunes a Roberta-based model to fill in blanks using control codes. Conditional Distribution Methods (CD): Methods like GBDA (Guo et al., 2021) and CFGAN (Robeer et al., 2021) train a conditional distribution for counterfactuals. 
Conditional Distribution Methods (CD): Methods like GBDA (Guo et al., 2021) and CF-GAN (Robeer et al., 2021) train a conditional distribution for counterfactuals. The counterfactuals are obtained by sampling from this distribution based on a target label. Counterfactual Generation with Large Language Models: Recently, there has been a trend towards using Large Language Models (LLMs) for counterfactual generation. Approaches like CORE (Dixit et al., 2022), DISCO (Chen et al., 2023) and FLARE (Bhattacharjee et al., 2024) optimize prompts fed into LLMs to generate the desired counterfactuals. This trend is driven by the versatile capabilities of LLMs in various tasks (Maynez et al., 2023). Despite the diverse approaches employed in generating counterfactuals across various studies, the common objective remains to generate high-quality counterfactuals. However, these studies employ different metrics, baselines, and datasets, as illustrated in Table 1. Therefore, given the rapid growth of approaches in this field, establishing a unified evaluation standard becomes paramount. Existing benchmarks for counterfactual generation (Pawelczyk et al., 2021; Moreira et al., 2022) focus exclusively on tabular data with properties that are orthogonal to text (e.g., continuous value ranges). Hence, we introduce CEval to fill this gap and provide a standard evaluation framework specifically tailored to textual counterfactual generation. Our benchmark unifies metrics of both counterfactual criteria and text quality assessment, including datasets with human annotations and a simple baseline from a large language model.", + "pre_questions": [], + "main_content": "Introduction The growing popularity of AI and increasingly complex \u201cblack-box\u201d models triggered a critical need for interpretability. As Miller (2019) highlights, explanations are often counterfactual, seeking to understand why event P occurred instead of alternative Q. Ideally, explanations should demonstrate how minimal changes in an instance could have led to different outcomes. For example, given a review: The film has funny moments and talented actors, but it feels long. To answer the question why this review has a negative sentiment instead of a positive sentiment, the answer might involve showing a positive example as a counterfactual like: The film has funny moments and talented actors, yet feels a bit long. This example enables us to identify specific words that require change and the necessary modifications to achieve the target sentiment. (The source code is included with the paper submission and will be publicly accessible upon acceptance.) This motivates counterfactual generation, the task of modifying an instance to produce a different model prediction with minimal change. Besides explanation (Robeer et al., 2021), the NLP community also utilizes counterfactual generation for various purposes such as debugging models (Ross et al., 2021), data augmentation (Dixit et al., 2022; Chen et al., 2023; Bhattacharjee et al., 2024) or enhancing model robustness (Treviso et al., 2023; Wu et al., 2021). However, generating counterfactuals is not straightforward due to the complexity of textual changes, involving replacements, deletions, and insertions. Determining where and how to modify text to alter predictions remains an open issue. Existing research efforts lack unified evaluation standards and often prioritize quantity over quality in generated counterfactuals. Table 1 illustrates the disparity in datasets, metrics, and baselines across different studies. Consequently, it becomes challenging to choose an optimal method for a specific purpose.
This highlights the need for a well-defined benchmark to comprehensively evaluate counterfactual generation methods for textual data. Such a benchmark should establish standard datasets, metrics, and baselines, enabling fair and meaningful comparisons. This work introduces CEval, a comprehensive benchmark to unify the evaluation of methods that modify text to change a classifier\u2019s prediction. Such methods include contrastive explanations, counterfactual generation, and adversarial attacks. Our benchmark offers multifaceted evaluation: assessing both the \u201ccounterfactual-ness\u201d (e.g., label flipping ability) and textual quality (e.g., fluency, grammar, coherence) of counterfactual texts. Additionally, it comprises carefully curated datasets with human annotations, as well as a straightforward baseline generated using a large language model. Moreover, we systematically review state-of-the-art methods in this field and directly compare their performance in our benchmark. [Table 1: The variations in datasets, metrics, and baselines used across different methods. Columns: Method | Dataset | Metrics | Baseline. Rows: MICE (Ross et al., 2021) | IMDB, RACE, Newsgroups | Flip rate, Fluency, Minimality | MICE\u2019s variants. GBDA (Guo et al., 2021) | AG News, Yelp, IMDB, MNLI | Accuracy, Cosine Similarity, #Queries | TextFooler (Jin et al., 2020), BAE (Garg and Ramakrishnan, 2020), BERT-Attack (Li et al., 2020). CF-GAN (Robeer et al., 2021) | HATESPEECH, SST-2, SNLI | Fidelity, Perceptibility, Naturalness | SEDC (Martens and Provost, 2014), PWWS+ (Ren et al., 2019), Polyjuice (Wu et al., 2021), TextFooler (Jin et al., 2020). Polyjuice (Wu et al., 2021) | IMDB, NLI, QQP | Diversity, Closeness | GPT-2 (Radford et al., 2019), T5 (Raffel et al., 2020), RoBERTa (Liu et al., 2019).] We provide an open-source library for both the benchmark and the methods, promoting reproducibility and facilitating further research in this domain. Furthermore, we present a systematic comparison of different methods, highlighting their strengths and weaknesses in generating counterfactual text. We analyze how automatically generated counterfactuals compare to human examples, revealing gaps and opportunities for improvement. We find that counterfactual generation methods often generate text of lower quality than simple prompt-based LLMs. While the latter may struggle to satisfy counterfactual metrics, they typically exhibit higher text quality. Based on these insights, we suggest exploring combinations of methods as promising directions for further research. We focus on counterfactual generation for textual data, which involves editing existing text with minimal modifications to produce new text that increases the probability of a predefined target label with respect to a black-box model. This process aims to generate a counterfactual, denoted as x', that alters the model\u2019s predictions compared to the original text x. Formally, given a fixed classifier f and a dataset with N samples (x_1, x_2, ..., x_N), x = (z_1, z_2, ..., z_n) represents a sequence of n tokens. The original prediction is denoted as f(x) = y, while the counterfactual prediction is y' ≠ y. The counterfactual generation process is represented by a method e : (z_1, ..., z_n) ↦ (z'_1, ..., z'_m), ensuring that f(e(x)) = y'. The resulting counterfactual example is x' = (z'_1, ..., z'_m) with m tokens.
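In code, the formulation above reduces to a simple contract: an editor e is valid for an instance x and target label y' if the fixed classifier f maps the edited text to y'. The sketch below illustrates this contract with toy stand-ins; the type aliases, the keyword "classifier", and the single-substitution "editor" are illustrative assumptions, not CEval's actual API.

```python
# Minimal sketch of the task contract f(e(x)) = y' with toy stand-ins.
from typing import Callable

Classifier = Callable[[str], int]        # f: text -> predicted label y
Editor = Callable[[str, int], str]       # e: (text, target label y') -> edited text x'

def is_valid_counterfactual(f: Classifier, e: Editor, x: str, y_target: int) -> bool:
    """Core requirement of the task: the edited text must receive the target label."""
    return f(e(x, y_target)) == y_target

# Toy keyword "classifier" (1 = positive, 0 = negative) and a one-phrase "editor".
def toy_classifier(text: str) -> int:
    return 1 if "great" in text.lower() else 0

def toy_editor(text: str, y_target: int) -> str:
    return text.replace("feels long", "feels great") if y_target == 1 else text

x = "The film has funny moments and talented actors, but it feels long."
print(is_valid_counterfactual(toy_classifier, toy_editor, x, y_target=1))  # True
```

Real methods differ in how e is realized (masking and filling, sampling, prompting), but they are all evaluated against this same contract plus the quality criteria that follow.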
A valid counterfactual instance should satisfy the following criteria (Molnar, 2022): Predictive Probability: A counterfactual instance x' should produce a prediction that closely matches the predefined target y'. In other words, the counterfactual text should effectively lead to the desired target label. Textual Similarity: A counterfactual x' should maintain as much similarity as possible to the original instance x in terms of text distance. This ensures that the generated text remains coherent and contextually aligned with the original. Likelihood in Feature Space: A valid counterfactual instance should exhibit feature values that resemble real-world text, indicating that x' remains close to a common distribution for text. This distribution can be represented by pretrained large language models trained on massive text corpora. This criterion ensures that the generated text is plausible, realistic, and consistent with typical language patterns. Diversity: There are multiple ways to modify the input to reach the desired label. A good counterfactual method should provide various options for changing a text instance to obtain the target label. 3.1 Metrics In CEval, we use two types of metrics: counterfactual metrics, which reflect the counterfactual criteria outlined above, and textual quality metrics, which assess the quality of the generated text, irrespective of its counterfactual properties. 3.1.1 Counterfactual metrics Flip Rate (FR): measures how effectively a method can change labels of instances with respect to a pretrained classifier. This metric represents the binary case of the Predictive Probability criterion, determining whether the label changed or not, and is commonly used in the literature (Treviso et al., 2023; Ross et al., 2021). FR is defined as the percentage of generated instances whose labels are flipped over the total number of instances N (Bhattacharjee et al., 2024): FR = (1/N) Σ_{i=1}^{N} 1[f(x_i) ≠ f(x'_i)]. Probability Change (ΔP): While the flip rate offers a binary assessment of Predictive Probability, it does not fully capture how closely a counterfactual instance aligns with the desired prediction. Some instances may come very close to the target prediction but still fail to flip the label. To address this limitation, we propose the Probability Change (ΔP) metric, which quantifies the difference between the probability of the target label for the original instance x and the probability of the target label for the contrasting instance x': ΔP = (1/N) Σ_{i=1}^{N} (P(y'_i | x_i, f) − P(y'_i | x'_i, f)). Here, P(·) represents the probability of the prediction y given the instance x with respect to the classifier f. The expression denotes the difference in the probabilities of the prediction y' for instances x and x' according to the classifier f. Token Distance (TD): To measure Textual Similarity, a common metric employed in the literature (Ross et al., 2021; Treviso et al., 2023) is the token-level distance d(x, x'), where d represents any edit distance metric. In this work, we use the Levenshtein distance for d to quantify the token-level distance between the original instance x and the counterfactual x'.
This choice is motivated by the Levenshtein distance\u2019s ability to capture all types of edits (insertions, deletions, or substitutions) and its widespread use in related work (Ross et al., 2021; Treviso et al., 2023): TD = (1/N) Σ_{i=1}^{N} d(x_i, x'_i) / |x_i|. Perplexity (PPL): To ensure that the generated text is plausible, realistic, and follows a natural text distribution, we leverage perplexity from GPT-2 (Radford et al., 2019) because of its effectiveness in capturing such distributions: PPL(x) = exp(−(1/n) Σ_{i=1}^{n} log p_θ(z_i | z_{<i}))
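The four metrics above map directly onto a few lines of code. The sketch below is a hedged illustration rather than CEval's implementation: the flip rate and probability change expect label predictions and target-label probabilities produced by any classifier, the token distance uses a small dynamic-programming Levenshtein over whitespace tokens, and the perplexity follows the GPT-2 formulation via Hugging Face Transformers; all function names are our own.

```python
# Hedged sketch of the counterfactual metrics defined above (FR, ΔP, TD, PPL).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def flip_rate(preds_orig, preds_cf):
    """FR: fraction of instances whose predicted label changed, i.e. f(x_i) != f(x'_i)."""
    return sum(p != q for p, q in zip(preds_orig, preds_cf)) / len(preds_orig)

def probability_change(p_target_orig, p_target_cf):
    """ΔP: mean of P(y'_i | x_i, f) - P(y'_i | x'_i, f), following the formula above."""
    return sum(po - pc for po, pc in zip(p_target_orig, p_target_cf)) / len(p_target_orig)

def _levenshtein(a, b):
    """Token-level edit distance (insertions, deletions, substitutions) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ta != tb)))
        prev = curr
    return prev[-1]

def token_distance(originals, counterfactuals):
    """TD: Levenshtein distance over whitespace tokens, normalised by the original length |x_i|."""
    ratios = [_levenshtein(x.split(), x_cf.split()) / len(x.split())
              for x, x_cf in zip(originals, counterfactuals)]
    return sum(ratios) / len(ratios)

def perplexity(text, model_name="gpt2"):
    """PPL(x): exp of the mean negative log-likelihood of the tokens under GPT-2."""
    tok = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return math.exp(loss.item())

# Toy usage: two of three labels flipped; one token edited in a three-token text.
print(flip_rate([0, 0, 1], [1, 0, 0]))                    # 0.666...
print(token_distance(["a long film"], ["a great film"]))  # 0.333...
```

Lower TD and PPL indicate counterfactuals that stay closer to the original text and to natural language, while higher FR and ΔP indicate stronger label-flipping behaviour.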