diff --git "a/abs_29K_G/test_abstract_long_2405.01097v1.json" "b/abs_29K_G/test_abstract_long_2405.01097v1.json"
new file mode 100644
--- /dev/null
+++ "b/abs_29K_G/test_abstract_long_2405.01097v1.json"
@@ -0,0 +1,87 @@
+{
+ "url": "http://arxiv.org/abs/2405.01097v1",
+ "title": "Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification",
+ "abstract": "Whistleblowing is essential for ensuring transparency and accountability in\nboth public and private sectors. However, (potential) whistleblowers often fear\nor face retaliation, even when reporting anonymously. The specific content of\ntheir disclosures and their distinct writing style may re-identify them as the\nsource. Legal measures, such as the EU WBD, are limited in their scope and\neffectiveness. Therefore, computational methods to prevent re-identification\nare important complementary tools for encouraging whistleblowers to come\nforward. However, current text sanitization tools follow a one-size-fits-all\napproach and take an overly limited view of anonymity. They aim to mitigate\nidentification risk by replacing typical high-risk words (such as person names\nand other NE labels) and combinations thereof with placeholders. Such an\napproach, however, is inadequate for the whistleblowing scenario since it\nneglects further re-identification potential in textual features, including\nwriting style. Therefore, we propose, implement, and evaluate a novel\nclassification and mitigation strategy for rewriting texts that involves the\nwhistleblower in the assessment of the risk and utility. Our prototypical tool\nsemi-automatically evaluates risk at the word/term level and applies\nrisk-adapted anonymization techniques to produce a grammatically disjointed yet\nappropriately sanitized text. We then use an LLM that we fine-tuned for\nparaphrasing to render this text coherent and style-neutral. We evaluate our\ntool's effectiveness using court cases from the ECHR and excerpts from a\nreal-world whistleblower testimony and measure the protection against\nauthorship attribution (AA) attacks and utility loss statistically using the\npopular IMDb62 movie reviews dataset. Our method can significantly reduce AA\naccuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original\ncontent's semantics.",
+ "authors": "Dimitri Staufer, Frank Pallas, Bettina Berendt",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY",
+ "cs.CL",
+ "cs.HC",
+ "cs.IR",
+ "cs.SE",
+ "H.3; K.4; H.5; K.5; D.2; J.4"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Whistleblowing is essential for ensuring transparency and accountability in\nboth public and private sectors. However, (potential) whistleblowers often fear\nor face retaliation, even when reporting anonymously. The specific content of\ntheir disclosures and their distinct writing style may re-identify them as the\nsource. Legal measures, such as the EU WBD, are limited in their scope and\neffectiveness. Therefore, computational methods to prevent re-identification\nare important complementary tools for encouraging whistleblowers to come\nforward. However, current text sanitization tools follow a one-size-fits-all\napproach and take an overly limited view of anonymity. They aim to mitigate\nidentification risk by replacing typical high-risk words (such as person names\nand other NE labels) and combinations thereof with placeholders.
Such an\napproach, however, is inadequate for the whistleblowing scenario since it\nneglects further re-identification potential in textual features, including\nwriting style. Therefore, we propose, implement, and evaluate a novel\nclassification and mitigation strategy for rewriting texts that involves the\nwhistleblower in the assessment of the risk and utility. Our prototypical tool\nsemi-automatically evaluates risk at the word/term level and applies\nrisk-adapted anonymization techniques to produce a grammatically disjointed yet\nappropriately sanitized text. We then use an LLM that we fine-tuned for\nparaphrasing to render this text coherent and style-neutral. We evaluate our\ntool's effectiveness using court cases from the ECHR and excerpts from a\nreal-world whistleblower testimony and measure the protection against\nauthorship attribution (AA) attacks and utility loss statistically using the\npopular IMDb62 movie reviews dataset. Our method can significantly reduce AA\naccuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original\ncontent's semantics.",
+ "main_content": "1 INTRODUCTION

In recent years, whistleblowers have become “a powerful force” for transparency and accountability, not just in the field of AI [9], but also in other technological domains and across both private- and public-sector organizations. Institutions such as the AI Now Institute [9] or the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [22] have emphasized the key role of whistleblower protection for societal well-being and often also the organizations’ own interests [21]. However, whistleblowing may be a threat to the organizations whose malfeasance is being revealed; thus (potential) whistleblowers often fear or face retaliation. Computationally-supported anonymous reporting seems to be a way forward, but even if reporting frameworks are sufficiently secure system- and network-wise, the report itself may allow inferences towards the whistleblower’s identity due to its content and the whistleblower’s writing style. Non-partisan organizations such as Whistleblower-Netzwerk e.V. (WBN) provide guidance on concise writing. Our interactions with WBN confirm that whistleblower testimonies often include unnecessary personal details. Existing approaches modifying the texts of such reports appear promising, but they take an overly limited view of anonymity and – like whistleblower protection laws – address only parts of the problem. This is detailed in Section 2. To improve on these approaches, we propose, implement, and evaluate a novel classification and mitigation strategy for rewriting texts that puts the whistleblower into the loop of assessing risk and utility. Our contributions are threefold. First (Section 3), we analyse the interleaved contributions of different types of identifiers in texts to derive a description of the problem for anonymous whistleblowing in terms of a trade-off between risk (identifiability of the whistleblower) and utility (of the rewritten text retaining sufficient information on the specific event details). We derive a strategy for assigning re-identification risk levels of concern to textual features, composed of an automated mapping and an interactive adjustment of concern levels. Second (Section 4), we describe our tool, which implements this strategy.
It applies (i) the word/term-to-concern mapping using natural language processing to produce a sanitized but possibly ungrammatical intermediate text version, (ii) a Large Language Model (LLM) that we fine-tuned for paraphrasing to render this text coherent and style-neutral, and (iii) interactivity to draw on the user’s context knowledge. Third (Section 5), we evaluate the resulting risk-utility trade-off. We measure the protection against authorship attribution attacks and utility loss statistically using an established benchmark dataset and show that it can significantly reduce authorship attribution accuracy while retaining utility. We also evaluate our tool’s effectiveness in masking direct and quasi-identifiers using the Text Anonymization Benchmark [48] and demonstrate its effectiveness on excerpts from a real-world whistleblower testimony. Section 6 sketches current limitations and future work. Section 7 describes ethical considerations and researchers’ positionality, and it discusses possible adverse impacts.

2 BACKGROUND AND RELATED WORK

This section describes the importance of, and threats to, whistleblowing (Section 2.1) and the promises and conceptual and practical challenges of “anonymity” in reporting (Section 2.2). We survey related work on the anonymization/de-identification of text and argue why it falls short in supporting whistleblowing (Section 2.3).

2.1 Challenges of Safeguarding Whistleblowers

Whistleblowers play a crucial role in exposing wrongdoings like injustice, corruption, and discrimination in organizations [6, 41]. However, their courageous acts often lead to negative consequences, such as subtle harassment and rumors, job loss and blacklisting, and, in extreme cases, even death threats [34, 37, 58]. In Western nations, whistleblowing is largely viewed as beneficial to society [66], leading to protective laws like the US Sarbanes-Oxley Act of 2002 and the European Union’s “Whistleblowing Directive” (Directive 2019/1937). The latter, for example, mandates the establishment of safe reporting channels and protection against retaliation. It also requires EU member states to provide whistleblowers with legal, financial, and psychological support. However, the directive faces criticism for its limitations. Notably, it does not cover all public-sector entities [63, p. 3] and leaves key decisions to member states’ discretion [1, p. 652]. This discretion extends to the absence of mandatory anonymous reporting channels and permits states to disregard cases they consider “clearly minor”, leaving whistleblowers without comprehensive protection for non-material harms like workplace bullying [63, p. 3]. Furthermore, according to White [70], the directive’s sectoral approach and reliance on a list of specific EU laws cause a patchwork of provisions, creating a complex and possibly confusing legal environment, particularly for those sectors impacting human rights and life-and-death situations. Last but not least, organizations often react negatively to whistleblowing due to the stigma of errors, even though recognizing these mistakes would be key to building a culture of responsibility [5, p. 12] and improving organizations and society [69]. The reality for whistleblowers is thus fraught with challenges, from navigating legal uncertainties to dealing with public perception [26, 51, 52], leaving many whistleblowers with no option but to report their findings anonymously [50].
However, “anonymous” reporting channels alone do not guarantee anonymity [5].

2.2 Anonymity, (De-)anonymization, and (De-/Re-)Identification

Anonymity is not a binary choice between being identified uniquely or not at all, but “the state of being not identifiable within a set of subjects [with potentially the same attributes], the anonymity set” [46, p. 9]. Of the manifold possible approaches towards this goal, state-of-the-art whistleblowing-support software as well as legal protections (where they exist) focus on anonymous communications [5]. This, however, does not guarantee anonymous reports. Instead, a whistleblower’s anonymity may still be at risk due to several factors, including: (i) surveillance technology, such as browser cookies, security mechanisms otherwise useful to prevent unauthenticated uses, cameras, or access logs, (ii) the author’s unique writing style, and (iii) the specific content of the message [33]. Berendt and Schiffner [5] refer to the latter as “epistemic non-anonymizability”, i.e., the risk of being identified based on the unique information in a report, particularly when the information is known to only a few individuals. In some cases, this may identify the whistleblower uniquely. Terms and their understanding in the domain of anonymity vary. We use the following nomenclature: anonymization is a modification of data that increases the size of the anonymity set of the person (or other entity) of interest; conversely, de-anonymization decreases it (to some number k ≥ 1). De-anonymization to k = 1, which includes the provision of an identifier (e.g., a proper name), is called re-identification. The removal of some identifying information (e.g., proper names), called de-identification, often but not necessarily leads to anonymization [4, 68]. In structured data, direct identifiers (e.g., names or social security numbers) are unique to an individual, whereas quasi-identifiers like age, gender, or zip code, though not unique on their own, can be combined to form unique patterns. Established mathematical frameworks for quantifying anonymity, such as Differential Privacy (DP) [16], and metrics such as k-anonymity [53], along with their refinements [27, 31], can be used when anonymizing datasets. Unstructured data such as text, which constitutes a vast majority of the world’s data, requires its own safeguarding methods, which fall into two broader categories [28]. The first, NLP-based text sanitization, focuses on linguistic patterns to reduce (re-)identification risk. The second, privacy-preserving data publishing (PPDP), involves methods like noise addition or generalization to comply with pre-defined privacy requirements [15].

2.3 Related Work: Text De-Identification and Anonymization, Privacy Models, and Adversarial Stylometry

De-identification methods in text sanitization mask identifiers, primarily using named entity recognition (NER) techniques. These methods, largely domain-specific, have been particularly influential in clinical data de-identification, as evidenced, for instance, by the 2014 i2b2/UTHealth shared task [62]. However, they do not or only partially address the risk of indirect re-identification [4, 38]. For example, Sánchez et al.
[55, 56, 57] make the simplifying assumption that replacing noun phrases which are rare in domain-specific corpora or on the web with more general ones offers sufficient protection. Others use recurrent neural networks [12, 30], reinforcement learning [71], support vector machines [65], or pre-trained language models [23] to identify and remove entities that fall into pre-defined categories. However, all of these approaches ignore or significantly underestimate the actual risks of context-based re-identification. More advanced anonymization methods, in turn, also aim to detect and remove identifiers that do not fit into the usual categories of named entities or are hidden within context. For example, Reddy and Knight [49] detect and obfuscate gender, and Adams et al. [2] introduce a human-annotated multilingual corpus containing 24 entity types and a pipeline consisting of NER and co-reference resolution to mask these entities. In a more nuanced approach, Papadopoulou et al. [44] developed a “privacy-enhanced entity recognizer” that identifies 240 Wikidata properties linked to personal identification. Their approach includes three key measures to evaluate if a noun phrase needs to be masked or replaced by a more general one [43]. The first measure uses RoBERTa [29] to assess how “surprising” an entity is in its context, assuming that more unique entities carry higher privacy risks. The second measure checks if web search results for entity combinations mention the individual in question, indicating potential re-identification risk. Lastly, they use a classifier trained with the Text Anonymization Benchmark (TAB) corpus [48] to predict masking needs based on human annotations. Kleinberg et al.’s [24] “Textwash” employs the BERT model, fine-tuned on a dataset of 3717 articles from the British National Corpus, Enron emails, and Wikipedia. The dataset was annotated with entity tags such as “PERSON_FIRSTNAME”, “LOCATION”, and an “OTHER_IDENTIFYING_ATTRIBUTE” category for indirect re-identification risks, along with a “NONE” category for tokens that are non-re-identifying. A quantitative evaluation (0.93 F1 score for detection accuracy, minimal utility loss in sentiment analysis, and part-of-speech tagging) and its qualitative assessment (82% / 98% success in anonymizing famous / semi-famous individuals) show promise. However, the more recent gpt-3.5-turbo can re-identify 72.6% of the celebrities from Textwash’s qualitative study on the first attempt, highlighting the evolving complexity of mitigating the risk of re-identification in texts [45]. In PPDP, several privacy models for structured data have been adapted for privacy guarantees in text. While most are theoretical [28], “C-sanitise” [54] determines the disclosure risk of a certain term t on a set of entities to protect (C), given background knowledge K, which by default is the probability of an entity co-occurring with a term t on the web. Additionally, DP techniques have been adapted to text, either for generating synthetic texts [20] or for obscuring authorship in text documents [68]. This involves converting text into word embeddings, altering these vectors with DP techniques, and then realigning them to the nearest words in the embedding model [73, 74].
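As an illustration of that embedding-perturb-realign scheme, consider the following toy sketch (ours, not the cited papers’ exact mechanism): a made-up five-word vocabulary, random stand-in embeddings, and the Gamma-radius noise commonly used for metric DP in this line of work.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["employee", "manager", "toxin", "resin", "sink"]   # toy vocabulary
dim = 8
E = rng.normal(size=(len(vocab), dim))                      # stand-in word embeddings

def dp_replace(word: str, epsilon: float = 5.0) -> str:
    """Add metric-DP-style noise to a word vector, then snap to the nearest vocab word."""
    v = E[vocab.index(word)]
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)                  # uniform random direction
    radius = rng.gamma(shape=dim, scale=1.0 / epsilon)      # Gamma(d, 1/eps) radius
    noisy = v + radius * direction
    return vocab[int(np.argmin(np.linalg.norm(E - noisy, axis=1)))]

print([dp_replace("toxin") for _ in range(5)])  # may or may not return "toxin" itself
```

Because each word is replaced independently by its nearest (possibly unrelated) neighbor, this mechanism preserves sentence length and ignores word types, which leads directly to the shortcomings discussed next.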
However, “word-level differential privacy” [35] faces challenges: it maintains the original sentence length, limiting variation, and can cause grammatical errors, such as replacing nouns with unrelated adjectives, due to not considering word types. Authorship attribution (AA) systems use stylistic features such as vocabulary, syntax, and grammar to identify an author. State-of-the-art approaches involve using Support Vector Machines [64, 72] and, more recently, fine-tuned LLMs like BertAA [3, 18, 64]. The “Valla” benchmark and software package standardizes evaluation methods and includes fifteen diverse datasets [64]. Contrasting this, adversarial stylometry modifies an author’s writing style to reduce AA systems’ effectiveness [61]. Advancements in machine translation [67] have also introduced new methods based on adversarial training [60], though they sometimes struggle with preserving the original text’s meaning. Semi-automated tools, such as “Anonymouth” [36], propose modifications for anonymity in a user’s writing, requiring a significant corpus of the user’s own texts. Moreover, recent advances in automatic paraphrasing using fine-tuned LLMs demonstrated a notable reduction in authorship attribution, but primarily for shorter texts [35]. To the best of our knowledge, there is no – and maybe there can be no – complete list of textual features contributing to the re-identification of individuals in text. As Narayanan and Shmatikov [40] highlight, “any attribute can be identifying in combination with others” [p. 3]. In text, we encounter elements like characters, words, and phrases, each carrying varying levels of meaning [19]. Single words convey explicit lexical meaning as defined by a vocabulary (e.g. “employee”), while multiple words are bound by syntactic rules to express more complex thoughts implicitly in phrases (“youngest employee”) and sentences (“She is the youngest employee”). In addition, the European Data Protection Supervisor (EDPS) and Spanish Data Protection Agency (AEPD) [17] state that anonymization can never be fully automated and needs to be “tailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons” [p. 7]. To take these insights and limitations into account, our semi-automated text sanitization tool leverages insights on the removal of identifying information but involves the whistleblower (the user) in the decision-making process.

3 RISK MODELLING AND RISK MITIGATION APPROACH

In this section, we derive the problem statement (Section 3.2) from an analysis of different identifier types (Section 3.1). Following an overview of our approach (Section 3.3), we detail the anonymization operations for textual features (Section 3.4) and the automatic assignment of default concern levels (Section 3.5).

3.1 Identifier Types, Author Identifiability, and Event Details in the Whistleblowing Setting

Whistleblowing reports convey information about persons, locations, and other entities. At least some of them need to be identified in order for the report to make any sense.
The following fictitious example consists of three possible versions of a report in order to illustrate how different types of identifiers may contribute to the re-identification of the anonymously reporting employee Jane Doe, a member of the Colours and Lacquer group in the company COLOURIFICS.

V1 On 24 January 2023, John Smith poured polyurethane resin into the clover-leaf-shaped sink of room R23.
V2 After our group meeting on the fourth Tuesday of January 2023, the head of the Colours and Lacquer Group poured a toxin into the sink of room R23.
V3 Somebody poured a liquid into a recepticle on some date in a room of the company.

In V1, “John Smith” is the lexical identifier1 of the COLOURIFICS manager John Smith, as is “24 January 2023” of that date. Like John Smith, room R23 is a unique named entity in the context of the company and also identified lexically. “Polyurethane resin” is the lexical identifier of a toxin (both are common nouns rather than names of individual instances of their category). The modifier “clover-leaf-shaped” serves as a descriptive identifier of the sink. In V2, John Smith is still identifiable via the descriptive identifier “head of the Colours and Lacquer Group”, at least on 24 January 2023 (reconstructed with the help of a calendar and COLOURIFICS’ personnel files). “Our” group meeting is an indexical identifier that signals that the whistleblower is one of the, say, five employees in the Colours and Lacquer Group. The indexical information is explicit in V2 given the background knowledge that only employees in this group were co-present (for example, in the company’s key-card logfiles). The same information may be implicit in V1 (if it can be seen from the company’s organigram who John Smith is and who works in his group). Both versions allow the inference that Jane Doe or any of her four colleagues must have been the whistleblower. If, in addition, only Jane Doe stayed behind “after the meeting”, that detail in V2 descriptively identifies her uniquely2. V3 contains only identifiers of very general categories. Many other variants are possible (for example, referencing, in a V4, “the head of our group”, which would enlarge the search space to all groups that had a meeting in R23 that day). The example illustrates the threats (i)-(iii) of Section 2.2. It also shows that the whistleblower’s “anonymity” (or lack thereof) is only one aspect of a more general and graded picture of who and what can be identified directly, indirectly, or not at all – and what this implies for the whistleblower’s safety as well as for the report’s effectiveness.

1 The classification of identifiers is due to Phillips [47]. Note that all types of identifiers can give rise to personal data in the sense of the EU’s General Data Protection Regulation (GDPR), Article 4(1): “any information which is related to an identified or identifiable natural person”, or personally identifiable data in the senses used in different US regulations. See [11] for legal aspects in the context of whistleblowing.
2 If John Smith knows that only she observed him, she is also uniquely identified in V1, but for the sake of the analysis, we assume that only recorded data/text constitute the available knowledge.
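To make the anonymity-set reasoning in this example concrete, here is a toy computation over hypothetical background knowledge (all records below are made up for illustration):

```python
# Hypothetical personnel and key-card records an adversary might cross-reference.
employees = [
    {"name": "Jane Doe",  "group": "Colours and Lacquer", "badged_into_R23": True},
    {"name": "Ann Lee",   "group": "Colours and Lacquer", "badged_into_R23": True},
    {"name": "Bob Ray",   "group": "Colours and Lacquer", "badged_into_R23": True},
    {"name": "Cem Oz",    "group": "Colours and Lacquer", "badged_into_R23": True},
    {"name": "Dana Fox",  "group": "Colours and Lacquer", "badged_into_R23": True},
    {"name": "Eve Stone", "group": "Quality Control",     "badged_into_R23": False},
]

# Report V2 reveals: the author attended "our" group meeting of the Colours and
# Lacquer Group in R23, so the anonymity set is the set of co-present group members.
anonymity_set = [e for e in employees
                 if e["group"] == "Colours and Lacquer" and e["badged_into_R23"]]
print(len(anonymity_set))  # k = 5; one extra detail ("stayed behind") can shrink it to 1
```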
Inspired by Domingo-Ferrer’s [14] three types of (data) privacy, we distinguish between the identifiability of the whistleblower Jane Doe (author3 identifiability, A_id) and descriptions of the event or other wrongdoing, including other actors (event details, E_dt). Given the stated context knowledge, we obtain an anonymity set of size k = 1 for John Smith in V1 and V2. Jane Doe is in an anonymity set of size k = 5 or even k = 1 in V2. In V1, that set may be of size k = 5 (if people routinely work only within their group) or larger (if they may also join other groups). Thus, the presence of a name does not necessarily entail a larger risk. Both are in an anonymity set containing all the company’s employees at the reported date in V3 (assuming no outsiders have access to company premises). The toxin and the sink may be in a smaller anonymity set in V1 than in V2 or V3, and their identifiability could increase further (for example, if only certain employees have access to certain substances). Importantly, the identifiability of people and other entities in E_dt can increase the identifiability of the whistleblower. V3 illustrates a further challenge: the misspelled “recepticle” may be a typical error of a specific employee, and the incorrect placement of the temporal before the spatial information suggests that the writer may be a German or Dutch native speaker. In addition to errors, correct variants also carry information that stylometry can use for authorship attribution, which obviously can have a large effect on A_id. The whistleblower would, on the one hand, want to reduce all such identifiabilities as much as possible. On the other hand, the extreme generalization of V3 creates a meaningless report that neither the company nor a court would follow up on. This general problem can be framed in terms of risk and utility, which will be described next.

3.2 The Whistleblowing Text-Writing Problem: Risk, Utility, and Many Unknowns

A potential whistleblower faces the following problem: “make A_id as small as possible while retaining as much E_dt as necessary”. We propose to address this problem by examining the text and possibly rewriting it. In principle, this is an instance of the oft-claimed trade-off between privacy (or other risk) and utility. In a simple world of known repositories of structured data, one could aim at determining the identification problem (e.g., by database joins to identify the whistleblower due to some attributive information they reveal about themselves, and by multiple joins for dependencies such as managers and teams) and compute how large the resulting anonymity set (or A_id as its inverse) is. Given a well-defined measure of information utility, different points on the trade-off curve would then be well-defined and automatically derivable solutions to a mathematical optimization problem. However, texts offer a myriad of ways to express a given piece of relational information. The space of information that could be cross-referenced, sometimes in multiple steps, is huge and often unknown to the individual. Consequently, in many cases, it is not possible to determine the anonymity set size with any mathematical certainty.

3 We assume that the potential whistleblower is also the author of the report. This is the standard setting.
Modifications for the situation in which a trusted third party writes the report on their behalf are the subject of future work.

In addition, setting a threshold could be dangerous: even if the anonymity set is k > 1, protection is not guaranteed – for example, the whole department of five people could be fired in retaliation. At the same time, exactly how specific a re-written text needs to be about A_id and E_dt in order to make the report legally viable4 cannot be decided without much more context knowledge. For example, the shape of the sink into which a toxic substance is poured probably makes no difference to the illegality, whereas the identity of the substance may affect it. These unknowns have repercussions both for tool design (Section 3.3) and for evaluation design (Section 5.1.1).

3.3 Risk Mitigation Approach and Tool Design: Overview

Potential whistleblowers would be ill-served by any fully automated tool that claims to be able to deliver a certain mathematically guaranteed anonymization. Instead, we propose to provide them with a semi-automated tool that does have some “anonymity-enhancing defaults” that illustrate, with the concrete material, how textual elements can be identifying and how they can be rendered less identifying. Our tool starts with the heuristic default assumption that identifiability is potentially always problematic and then lets the user steer our tool by specifying how “concerning” specific individual elements are and choosing, interactively, the treatment of each of them that appears to give the best combination of A_id and E_dt. By letting the author/user assign these final risk scores in the situated context of the evolving text, we enable them to draw on a maximum of implicit context knowledge. Our approach and tool proceed through several steps. We first determine typical textual elements that can constitute or be part of the different types of identifiers. As can be seen in Table 1, most of them can affect A_id and E_dt. Since identification by name (or, by extension, pronouns that co-reference names) does not even need additional background knowledge, and since individuals are more at risk than generics, we classify some textual features as “highly concerning”, others as having “medium concern”, and the remainder as “potentially concerning”. We differentiate between two types of proper nouns. Some names refer to typical “named entities”, which include, in particular, specific people, places, and organizations, as well as individual dates and currency amounts. These pose particular person-identification risk in whistleblowing scenarios.5 “Other proper nouns”, such as titles of music pieces, books, and artworks, generally only pose medium risk. For stylometric features, we explicitly categorize out-of-vocabulary words, misspelled words, and words that are surprising given the overall topic of the text.
Other low-level stylometric features, such as punctuation patterns, average word and sentence length, or word and phrase repetition, are not (and in many cases, such as with character n-gram patterns, cannot be [25]) explicitly identified. Instead, we implicitly/indirectly account for them as a byproduct of the LLM-based rephrasing. For all other parts of speech, we propose to use replacement strategies based on data-anonymization operations that are proportional to the risk (Table 2). Given the complexities of natural language and potential context information, the latter two operations are necessarily heuristic; thus, our tool applies the classification and the risk mitigation strategy as a default which can then be adapted by the user.

4 “a situation in which a plan, contract, or proposal is able to be legally enforced”, https://ludwig.guru/s/legally+viable, retrieved 2024-01-02
5 PERSON, GPE (region), LOC (location), EVENT, LAW, LANGUAGE, DATE, TIME, PERCENT, MONEY, QUANTITY, and ORDINAL

Table 1: Overview of the approach from identifier types to default risk.

| Identifier Type | Textual Feature | A_id/E_dt | Default Risk |
| Lexical | Names of named entities | A_id, E_dt | High |
| Lexical | Other proper nouns | E_dt | Medium |
| Indexical | Pronouns | A_id, E_dt | High |
| Descriptive | Common nouns | E_dt, (A_id) | Potential |
| Descriptive | Modifiers | E_dt, (A_id) | Potential |
| Descriptive (via pragmatic inferences) | Out-of-vocabulary words (a) | A_id, (E_dt) | Medium |
| Descriptive (via pragmatic inferences) | Misspelled words (a) | A_id | Medium |
| Descriptive (via pragmatic inferences) | Surprising words (b) | A_id | Medium |
| Descriptive (via pragmatic inferences) | Other stylometric features | A_id | N/A (c) |

(a) Treated as noun. (b) Nouns or proper nouns. (c) Not explicitly specified. Indirectly accounted for through rephrasing.

Table 2: Mitigation strategies based on assigned risk (LvC = level of concern, NaNEs = names of named entities, OPNs = other proper nouns, CNs = common nouns, Mods = modifiers, PNs = pronouns, OSFs = other stylometric features).

| LvC | NaNEs | OPNs | CNs | Mods | PNs | OSFs |
| High | Suppr. | Suppr. | Suppr. | Suppr. | Suppr. | Pert. |
| Medium | Pert. | Generl. | Generl. | Pert. | Suppr. | Pert. |

3.4 Anonymization Operations for Words and Phrases

In our sanitization pipeline, we conduct various token removal and replacement operations based on each token’s POS tag and its assigned level of concern (LvC), which can be “potentially concerning”, “medium concerning”, or “highly concerning”. Initially, we consider all common nouns, proper nouns, adjectives, adverbs, pronouns, and named entities6 as potentially concerning. Should the user or our automatic LvC estimation (see subsection 3.5) elevate the concern to either medium or high, we apply anonymization operations that are categorized into generalization, perturbation, and suppression. Specific implementation details are elaborated on in section 4.

6 By this, we mean names of named entities, e.g. “Berlin” for GPE, but we use named entities instead for consistency with other literature.
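To make the Table 2 mapping concrete, a minimal dispatch sketch (our illustration, not the authors’ code) could look as follows:

```python
# Sketch: dispatch from (feature class, level of concern) to the Table 2 operation.
# Feature classes: NaNE, OPN, CN, Mod, PN (pronoun); OSFs are handled implicitly
# by the rephrasing step, and "potential" LvC tokens are left untouched.
MITIGATION = {
    ("NaNE", "high"): "suppress",   ("NaNE", "medium"): "perturb",
    ("OPN",  "high"): "suppress",   ("OPN",  "medium"): "generalize",
    ("CN",   "high"): "suppress",   ("CN",   "medium"): "generalize",
    ("Mod",  "high"): "suppress",   ("Mod",  "medium"): "perturb",
    ("PN",   "high"): "suppress",   ("PN",   "medium"): "suppress",
}

def operation(feature_class, lvc):
    """Return the mitigation operation, or None for 'potentially concerning' tokens."""
    return MITIGATION.get((feature_class, lvc))
```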
3.4.1 Generalization. The least severe type of operation targets common nouns and other proper nouns marked as medium concerning. We assume their specificity (not necessarily their general meaning) poses re-identification risks. Thus, more general terms can be used to preserve meaning while mitigating the risk of re-identification.
• Common nouns like “car” are replaced with hypernyms from WordNet, such as “vehicle”.
• Other proper nouns become broader Wikidata terms, e.g. “political slogan” for “Make America Great Again”.

3.4.2 Perturbation. This applies to modifiers7 and named entities annotated as medium concerning. In this process, original words are retained but are assigned zero weight in the paraphrase generation, along with their synonyms and inflections. This approach relies on the LLM to either (a) find similar but non-synonymous replacement words or (b) completely rephrase the sentence to exclude these words. For example, “Microsoft, the giant tech company, ...” could be paraphrased as “A leading corporation in the technology sector...”.

7 The current version of our tool considers only adjectives and adverbs as modifiers.

3.4.3 Suppression. The most severe type of operation is applied to common nouns, other proper nouns, modifiers and named entities annotated as highly concerning, and to pronouns that are either medium concerning or highly concerning. We assume these words are either too unique or cannot be generalized.
• For common nouns and other proper nouns, dependent phrases are omitted (e.g., “We traveled to the London Bridge in a bus.” becomes “We traveled in a bus.”).
• Modifiers are removed (e.g., “He used to be the principal dancer” becomes “He used to be a dancer”).
• Named entities are replaced with nondescript phrases (e.g., “Barack Obama” becomes “certain person”).
• Pronouns are replaced with “somebody” (e.g., “He drove the bus.” becomes “Somebody drove the bus.”).

3.5 Automatic Level of Concern (LvC) Estimation

In our whistleblowing context, we deem the detection of outside-document LvC via search engine queries, as proposed by Papadopoulou et al. [44] (refer to related work in 2.3), impractical. This is because whistleblowers are typically not well-known, and the information they disclose is often novel, not commonly found on the internet. Therefore, instead of relying on external data, we focus on inner-document LvC, setting up a rule-based system and allowing users to adjust the LvC based on their contextual knowledge. Further, we assume that this pre-annotation of default concern levels raises awareness for potential sources of re-identification. The defaults are as follows (a sketch of this rule-based assignment follows the list):
• Common nouns and modifiers, by default, are potentially concerning. As fundamental elements in constructing a text’s semantic understanding, they could inadvertently reveal re-identifying details like profession or location. However, without additional context, their LvC is not definitive.
• Other proper nouns, unexpected words, misspelled words, and out-of-vocabulary words default to medium concerning. Unlike categorized named entities, other proper nouns only indirectly link to individuals, places, or organizations. Unexpected words may diminish anonymity, according to Papadopoulou et al. [44], while misspelled or out-of-vocabulary words can be strong stylometric indicators.
• Named entities are considered highly concerning by default, as they directly refer to specific entities in the world, like people, organizations, or locations, posing a significant re-identification risk.
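A minimal sketch of these defaults, using spaCy as in the pipeline described next (our illustration; the misspelling and “surprising word” checks are stubbed out, and the is_oov flag is only a rough proxy with small models):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# NE types treated as highly concerning (see footnote 5).
HIGH_RISK_NE = {"PERSON", "GPE", "LOC", "EVENT", "LAW", "LANGUAGE", "DATE",
                "TIME", "PERCENT", "MONEY", "QUANTITY", "ORDINAL"}

def default_lvc(tok):
    if tok.ent_type_ in HIGH_RISK_NE:
        return "high"              # names of named entities
    if tok.pos_ == "PRON":
        return "high"              # pronouns: listed as high-risk in Table 1
    if tok.pos_ == "PROPN":
        return "medium"            # other proper nouns
    if tok.is_oov:
        return "medium"            # out-of-vocabulary proxy; spell/surprisal checks omitted
    if tok.pos_ in {"NOUN", "ADJ", "ADV"}:
        return "potential"         # common nouns and modifiers
    return None

doc = nlp("John Smith poured polyurethane resin into the clover-leaf-shaped sink of room R23.")
print([(t.text, default_lvc(t)) for t in doc])
```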
4 IMPLEMENTATION

Our semi-automated text sanitization tool consists of a sanitization pipeline (Sections 4.1 and 4.2) and a user interface (Section 4.3). The pipeline uses off-the-shelf Python NLP libraries (spaCy, nltk, lemminflect, constituent_treelib, sentence-transformers) and our paraphrasing-tuned FLAN T5 language model. FLAN T5’s error-correcting capabilities [39, 42] aid in reconstructing sentence fragments after words or phrases with elevated levels of concern have been removed. The user interface is built with standard HTML, CSS, and JavaScript. Both components are open source and on GitHub8.

4.1 Anonymization Operations for Words and Phrases

4.1.1 Generalization. Common nouns undergo generalization by first retrieving their synsets and hypernyms from WordNet, followed by calculating the cosine similarity of their sentence embeddings with those of the hypernyms. This calculation ranks the hypernyms by semantic similarity to the original word, enabling the selection of the most suitable replacement. By default, we select the closest hypernym. Other proper nouns are generalized as follows: We first query Wikipedia to identify the term, using the all-mpnet-base-v2 sentence transformer to disambiguate its meaning through cosine similarity. Next, we find the most relevant Wikidata QID and its associated hierarchy. We then flatten these relationships and replace the entity with the next higher-level term in the hierarchy.

4.1.2 Perturbation. We add randomness to modifiers and named entities through LLM-based paraphrasing, specifically, by using the FLAN-T5 language model, which we fine-tuned for paraphrase generation (Section 4.2). To achieve perturbation9, we give the tokens in question and their synonyms and inflections zero weight during next token prediction. This forces the model to either use a less probable word (controlled by the temperature hyperparameter) or rephrase the sentence to exclude these words. Using an LLM for paraphrase generation has the added benefit that it mends fragmented sentences caused by token suppression and yields a neutral writing style, adjustable through the no_repeat_ngram_size hyperparameter.

8 https://github.com/dimitristaufer/Semi-Automated-Text-Sanitization
9 The strategies “suppression” and “generalization” are straightforward adaptations of the classical methods for structured data. Perturbation “replaces original values with new ones by interchanging, adding noise or creating synthetic data” [7]. Interchanging would create ungrammatical texts, and noise can only be added to certain data. We, therefore, generate synthetic data via LLM-rephrasing, disallowing the highly specific words/terms and their synonyms while producing a new but grammatical text.
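One plausible way to realize this zero-weighting at generation time (our sketch, not necessarily the tool’s exact mechanism) is Hugging Face’s bad_words_ids constraint; the checkpoint name below is a stand-in for the fine-tuned model of Section 4.2:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stand-in for the paraphrasing-tuned FLAN T5 checkpoint described in Section 4.2.
tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Tokens to perturb plus their synonyms/inflections (hand-picked here for brevity;
# the pipeline derives them with WordNet and lemminflect).
banned = ["Microsoft", "microsoft", "giant", "gigantic", "tech"]
bad_words_ids = tok(banned, add_special_tokens=False).input_ids

inputs = tok("paraphrase: Microsoft, the giant tech company, hired her.",
             return_tensors="pt")
out = model.generate(**inputs,
                     do_sample=True,               # sample instead of greedy decoding
                     temperature=0.8,              # higher T = stronger perturbation
                     no_repeat_ngram_size=2,       # push output away from source style
                     bad_words_ids=bad_words_ids,  # banned tokens get zero probability
                     max_length=512)
print(tok.decode(out[0], skip_special_tokens=True))
```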
4.1.3 Suppression. Common nouns and other proper nouns are suppressed by removing the longest phrase containing them with the constituent_treelib library. Sentences with just one noun or proper noun are entirely removed. Otherwise, the longest phrase, be it a main clause, verb phrase, prepositional phrase, or noun phrase, is identified, removed, and replaced with an empty string. Modifiers are removed (e.g., “He is their principal dancer” → “He is their · dancer”). Pronouns are replaced with the static string “somebody”. For example, “His apple” → “Somebody apple” (after replacement) → “Somebody’s apple” (after paraphrase generation). Named entities are replaced with static phrases based on their type. For example, “John Smith sent her 2 Million Euros from his account in Switzerland” → “certain person sent somebody certain money from somebody account in certain location” (after suppressing pronouns and named entities) → “A certain individual sent a specific amount of money to whoever’s account in some particular place” (after paraphrase generation).

4.2 Paraphrase Generation

We fine-tuned two variants of the FLAN T5 language model, FLAN T5-Base and FLAN T5-XL, using the “chatgpt-paraphrases” dataset, which uniquely combines three large paraphrasing datasets for varied topics and sentence types. It includes question paraphrasing from the “Quora Question Pairs” dataset, context-based paraphrasing from “SQuAD2.0”, and summarization-based paraphrases from the “CNN-DailyMail News Text Summarization” dataset. Furthermore, it was enriched with five diverse paraphrase variants for each sentence pair generated by the gpt-3.5-turbo model, resulting in 6.3 million unique pairs. This diversity enhances our model’s paraphrasing capabilities and reduces overfitting. For training, we employed Parameter-Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation), which adapts the model to new data without the need for complete retraining. We quantized the model weights to enhance memory efficiency using bitsandbytes. We trained FLAN T5-Base on an NVIDIA A10G Tensor Core GPU for one epoch (35.63 hours) on 1 million paraphrase pairs, using an initial learning rate of 1e-3. After one epoch, we achieved a minimum Cross Entropy loss of 1.195. FLAN T5-XL was trained for one epoch (22.38 hours) on 100,000 pairs and achieved 0.88. For inference, we configure max_length to 512 tokens to cap the output at T5’s tokenization limit. do_sample is set to True, allowing for randomized token selection from the model’s probability distribution, enhancing the variety of paraphrasing. Additionally, parameters like temperature, no_repeat_ngram_size, and length_penalty are adjustable via the user interface, providing control over randomness, repetition avoidance, and text length.

4.3 User Interface

Our web-based user interface communicates with the sanitization pipeline via Flask endpoints. It visualizes token LvCs (gray, yellow, red), allows dynamic adjustments of these levels, and starts the sanitization process. Moreover, a responsive side menu allows users to select the model size and tune hyperparameters for paraphrasing. The main window (Figure 1) shows the original and the sanitized texts, with options for editing and annotating.

Figure 1: The UI’s main window showing the input text (left) and the sanitized text (right). We made up the input and converted it to “Internet Slang” (https://www.noslang.com/reverse) to showcase how an extremely obvious writing style is neutralized.

5 EVALUATION

We evaluate our tool quantitatively (Sections 5.1 and 5.2) and demonstrate its workings and usefulness with an example from a real-world whistleblower testimony (Section 5.3).
They complement each other in that the first focuses on identification via writing style and the latter two on identification via content.

5.1 Re-Identification Through Writing Style: IMDb62 Movie Reviews Dataset

5.1.1 Evaluation metrics. The large unknowns of context knowledge imply that evaluations cannot rely on straightforward measurement methods for A_id and E_dt. We, therefore, work with the following proxies.

Text-surface similarities: To understand the effect of language model size and hyperparameter settings on lexical and syntactic variations from original texts, we utilize two ROUGE scores: ROUGE-L (Longest Common Subsequence), to determine to what extent the overall structure and sequence of information in the text changes, and ROUGE-S (Skip-Bigram), to measure word-pair changes and changes in phrasing.

Risk: Without further assumptions about the (real-world case-specific) background knowledge, it is impossible to exactly quantify the ultimate risk of re-identification (see Section 3.1). We therefore only measure the part of A_id where (a) the context knowledge is more easily circumscribed (texts from the same author) and (b) benchmarks are likely to generalize across case studies: the risk of re-identification based on stylometric features, measured as authorship attribution accuracy (AAA).

Utility: It is also to be expected that the rewriting reduces E_dt, yet again it is impossible to exactly determine (without real-world case-specific background knowledge and legal assessment) whether the detail supplied is sufficient to allow for legal follow-up of the report or even only to create alarm that could then be followed up. We, therefore, measure E_dt utility through two proxies: a semantic similarity measure and a sentiment classifier. To estimate semantic similarity (SSim), we calculate the cosine similarity of both texts’ sentence embeddings using the SentenceTransformer10 Python framework. To determine the absolute sentiment score difference (SSD), we classify the texts’ sentiment using an off-the-shelf BERT-based classifier11 from Hugging Face Hub. All measures are normalized to take on values between 0 and 1, and although the absolute values of the scores between these endpoints (except for authorship attribution) cannot be interpreted directly, the comparison of relative orders and changes will give us a first indication of the impacts of different rewriting strategies on A_id and E_dt.

10 all-mpnet-base-v2
11 bert-base-multilingual-uncased
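A sketch of the two utility proxies (ours; the sentiment checkpoint below is the popular nlptown variant of the footnoted model, and the 0–1 star normalization is our assumption):

```python
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-mpnet-base-v2")
# Assumed checkpoint; the footnote names the base bert-base-multilingual-uncased model.
sentiment = pipeline("sentiment-analysis",
                     model="nlptown/bert-base-multilingual-uncased-sentiment")

def ssim(original: str, sanitized: str) -> float:
    """Semantic similarity: cosine similarity of the two sentence embeddings."""
    a, b = embedder.encode([original, sanitized], convert_to_tensor=True)
    return float(util.cos_sim(a, b))

def ssd(original: str, sanitized: str) -> float:
    """Absolute sentiment score difference; 1-5 stars mapped to [0, 1] (our choice)."""
    def score(text):
        stars = int(sentiment(text)[0]["label"].split()[0])  # e.g. "4 stars" -> 4
        return (stars - 1) / 4.0
    return abs(score(original) - score(sanitized))
```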
5.1.2 Data, language models, and settings. We investigate protection against authorship attribution attacks with the popular IMDb62 movie reviews dataset [59], which contains 62,000 movie reviews by 62 distinct authors. We assess AAA using the “Valla” software package [64], specifically its two most effective models: one based on character n-grams and the other on BERT. This approach covers both ends of the authorship attribution spectrum [3], from low-level, largely topic-independent character n-grams to the context-rich features of the pre-trained BERT model. The evaluation was conducted on AWS EC2 “g4dn.xlarge” instances with NVIDIA T4 GPUs. We processed 130 movie reviews for each of the 62 authors across twelve FLAN T5 configurations, totaling 96,720 texts with character counts spanning from 184 to 5248. Each review was sanitized with its textual elements assigned their default LvCs (see 3.5). Both model sizes, “Base” (250M parameters) and “XL” (3B parameters), were tested with temperature values T of 0.2, 0.5, and 0.8, as well as with no_repeat_ngram_size (NRNgS) set to 0 or 2. The former, temperature, controls the randomness of the next-word predictions by scaling the logits before applying softmax, which makes the predictions more or less deterministic. For our scenario, this causes smaller or greater perturbation of the original text’s meaning. The latter, NRNgS, disallows n consecutive tokens to be repeated in the generated text, which for our scenario means deviating more or less from the original writing style. The risk-utility trade-offs of all configurations are compared to three baselines: Baseline 1 is the original text. In Baseline 2, similar to state-of-the-art related work [24, 44], we only redact named entities by replacing them with placeholders, such as “[PERSON]”, and do not utilize our language model. Similarly, in Baseline 3 we only remove named entities but rephrase the texts using our best-performing model configuration regarding AA protection.

5.1.3 Results. The n-gram-based and BERT-based “Valla” classifiers achieved AAA baselines of 98.81% and 98.80%, respectively. As expected, the AAA and text-surface similarities varied significantly depending on the model configuration. The XL model generated texts with much smaller ROUGE-L and ROUGE-S scores, i.e. more lexical and syntactic deviation from the original texts. Using NRNgS = 2 slightly decreased AAA in all configurations while not significantly affecting semantic similarity, which is why we use this for all the following results.

[Plots omitted.] Figure 2: Risk-utility trade-offs. (a) Risk-utility trade-off between AAA and SSim; (b) risk-utility trade-off between AAA and SSD. Plotted: BASE and XL with NRNgS = 0 or 2, against Baselines 1-3.

Figure 2 (a) shows the risk-utility trade-off between AAA and SSim. “Top-left” (0,1) would be the fictitious best result. For each model configuration, increasing T caused AAA to drop but also decreased utility by ∼8%/4% (BASE/XL) for SSim and ∼12%/3% (BASE/XL) for SSD.
The figure shows that the investigated settings create a trade-off curve, with XL (T = 0.8, NRNgS = 2) allowing for a large reduction in AAA (to 31.22%, as opposed to the original-text Baseline 1 value of 98.81%), while BASE (T = 0.2, NRNgS = 0) retains the most SSim (0.731, as opposed to the original texts, which have SSim = 1 to themselves). Figure 2 (b) shows the risk-utility trade-off between AAA and SSD (the plot shows 1−SSD to retain “top left” as the optimal point). The results mirror those of AAA-SSim, except for Baseline 2: because only named entities (not considered sentiment-carrying) are removed, the sentiment score changes only minimally.

5.1.4 Discussion. In summary, all our models offer a good compromise between baselines representing state-of-the-art approaches. They have lower risk and higher or comparable utility compared to Baseline 2, where named entities are only replaced with placeholders. This indicates the effectiveness of LLM-based rephrasing in defending against authorship attribution. Baseline 3, which involves suppressing named entities and rephrasing, shows the lowest risk due to limited content left for the LLM to reconstruct, resulting in mostly short, arbitrary sentences, as reflected by low SSim scores.

5.2 Re-Identification Through Content: European Court of Human Rights Cases

Pilán et al.’s [48] Text Anonymization Benchmark (TAB) includes a corpus of 1,268 English-language court cases from the European Court of Human Rights, in which directly- and quasi-identifying nominal and adjectival phrases were manually annotated. It solves several issues that previous datasets have, such as being “pseudoanonymized”, including only a few categories of named entities, not differentiating between identifier types, containing only famous individuals, or being small. TAB’s annotation is focused on protecting the identity of the plaintiff (also referred to as “applicant”).

5.2.1 Evaluation Metrics. TAB introduces two metrics, entity-level recall (ER_di/qi) to measure privacy protection and token-level weighted precision (WP_di+qi) for utility preservation. Entity-level means that an entity is only considered safely removed if all of its mentions are. WP_di+qi uses BERT to determine the information content of a token t by estimating the probability of t being predicted at position i. Thus, precision is low if many t with high information content are removed. Both metrics use micro-averaging over all annotators to account for multiple valid annotations. Because our tool automatically rephrases the anonymized texts, we make two changes. First, since we cannot reliably measure WP_di+qi, we fall back to our previously introduced proxies for measuring E_dt utility. Secondly, we categorize newly introduced entities from LLM hallucination that may change the meaning of the sanitized text. The legal texts, which must prefer direct and commonly-known identifiers, are likely to present none or far fewer of the background-knowledge-specific re-identification challenges of our domain. Thus, again, the metrics used here should be regarded as proxies.

Risk: We measure A_id using ER_di/qi and count slightly rephrased names of entities as “not removed” using the Levenshtein distance. For example, rephrasing “USA” as “U.S.A” has the same influence on ER_di/qi as failing to remove “USA”.

Utility: We estimate E_dt through SSim. In addition, we determine all entities in the sanitized text that are not in the original text (again using the Levenshtein distance). We categorize them into (1) rephrased harmful entities (semantically identical to at least one entity that should have been masked), (2) rephrased harmless entities, and (3) newly introduced entities. We measure semantic similarity by calculating the cosine similarity of each named entity phrase’s sentence embedding to those in the original text.
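A sketch of the entity-level recall computation with Levenshtein-based matching of rephrased mentions (our illustration; the normalized-distance threshold is an assumption, not the benchmark’s exact setting):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def still_present(mention: str, sanitized_entities: list, thresh: float = 0.5) -> bool:
    """A mention counts as 'not removed' if some sanitized entity is a near-verbatim
    rephrasing, e.g. 'USA' vs. 'U.S.A' (normalized distance 0.4 < 0.5)."""
    return any(levenshtein(mention.lower(), e.lower()) / max(len(mention), len(e)) < thresh
               for e in sanitized_entities)

def entity_level_recall(entities: list, sanitized_entities: list) -> float:
    """entities: one list of mention strings per annotated entity.
    An entity is safely removed only if ALL of its mentions are removed."""
    removed = [e for e in entities
               if not any(still_present(m, sanitized_entities) for m in e)]
    return len(removed) / len(entities)

print(entity_level_recall([["USA", "United States"]], ["U.S.A"]))  # 0.0: not removed
```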
5.2.2 Data, language models, and settings. The TAB corpus comprises the first two sections (introduction and statement of facts) of each court case. For our evaluation, we use the test split, which contains 127 cases, each of which has, on average, 2174 characters (356 words) and 13.62 annotated phrases. We perform all experiments using the “XL” (3B parameter) model with temperature values T of 0.2, 0.5, and 0.8, as well as with NRNgS set to 2.

5.2.3 Results and Discussion. ER_di/qi and SSim vary slightly, but not significantly, for different T values. For T = 0.2, we get an entity-level recall on quasi-identifiers (ER_qi) of 0.93, which is slightly better than Pilán et al.’s [48] best performing model trained directly on the TAB corpus (0.92). However, our result for direct identifiers (ER_di) is 0.53, while theirs achieves 1.0, i.e. does not miss a single high-risk entity. Closer inspection reveals that our low results for direct identifiers come mainly from (i) the spaCy NER failing to detect the entity type CODE (e.g. “10424/05”) and (ii) the LLM re-introducing names of named entities that are spelled slightly differently (e.g. “Mr Abdisamad Adow Sufi” instead of “Mr Abdisamad Adow Sufy”). Regarding utility, all three model configurations achieve similar SSim scores ranging from 0.67 (T = 0.8) to 0.69 (T = 0.2). These results fall into the same range achieved using the IMDb62 movie reviews dataset. However, in addition to re-introducing entities that should have been masked, we found that, on average, the LLM introduces 5.24 new entities (28.49%) per court case. While some of these, depending on the context, can be considered harmless noise (e.g.
5.2.2 Data, language models, and settings. The TAB corpus comprises the first two sections (introduction and statement of facts) of each court case. For our evaluation, we use the test split, which contains 127 cases, each with an average of 2,174 characters (356 words) and 13.62 annotated phrases. We perform all experiments using the "XL" (3B parameter) model with temperature values $T$ of 0.2, 0.5, and 0.8, as well as with NRNgS set to 2.

5.2.3 Results and Discussion. $ER_{di/qi}$ and SSim vary slightly, but not significantly, across $T$ values. For $T = 0.2$, we get an entity-level recall on quasi-identifiers ($ER_{qi}$) of 0.93, which is slightly better than Pilán et al.'s [48] best-performing model trained directly on the TAB corpus (0.92). However, our result for direct identifiers ($ER_{di}$) is 0.53, while theirs achieves 1.0, i.e., it does not miss a single high-risk entity. Closer inspection reveals that our low results for direct identifiers stem mainly from (i) the spaCy NER failing to detect the entity type CODE (e.g. "10424/05") and (ii) the LLM re-introducing names of named entities that are spelled slightly differently (e.g. "Mr Abdisamad Adow Sufi" instead of "Mr Abdisamad Adow Sufy"). Regarding utility, all three model configurations achieve similar SSim scores, ranging from 0.67 ($T = 0.8$) to 0.69 ($T = 0.2$). These results fall into the same range achieved using the IMDb62 movie reviews dataset. However, in addition to re-introducing entities that should have been masked, we found that, on average, the LLM introduces 5.24 new entities (28.49%) per court case. While some of these, depending on the context, can be considered harmless noise (e.g. "European Supreme Tribunal"), manual inspection revealed that many change the meaning and legitimacy of the sanitized texts. For example, 4.7% contain names of people that do not appear in the original text, 43.3% contain new article numbers, 20.5% contain new dates, and 11.8% include names of potentially unrelated countries. The frequency of such hallucinations could also be a consequence of the specific text genre of court cases; future work should examine to what extent this also occurs in whistleblower testimonies and how it affects the manual post-processing of the generated text that is previewed in our semi-automated tool.

5.3 Re-Identification Through Content: Whistleblower Testimony Excerpts
We further investigated our tool's rewritings of two excerpts (Tables 3 and 4) from a whistleblower's hearing in the Hunter Biden tax evasion case, as released by the United States House Committee on Ways and Means.12 This qualitative view of our results provides a detailed understanding of which identifiers were rewritten and how.13

12 https://waysandmeans.house.gov/?p=39854458 [Accessed 29-April-2024], "#2"
13 To answer these questions, it is immaterial whether the text sample describes a concrete act of wrongdoing (as in our fictitious Ex. 1) or not (as here).

5.3.1 Approach. First, we compiled the essential $E_{dt}$ upon which we based our analysis. Next, we assessed the textual features in both excerpts to enhance our tool's automatic Level of Concern (LvC) estimations, aiming for the lowest author identifiability ($A_{id}$). Finally, we input these annotations into the user interface to produce the rewritings.

5.3.2 $E_{dt}$ and $A_{id}$. Based on the information from the original texts in Tables 3 and 4 alone, we define $E_{dt}$ as follows, with $E_{dt1}$ and $E_{dt2}$ being subsets of excerpt 1 and $E_{dt3}$ a subset of excerpt 2:

$E_{dt}$ := {
  $E_{dt1}$: "The Tax Division approved charges but for no apparent reason changed their decision to a declination.",
  $E_{dt2}$: "The declination occurred after significant effort was put into the investigation by the whistleblower.",
  $E_{dt3}$: "In their effort in doing what is right, the whistleblower suffered on a professional and personal level."
}
In $exc_1$ (Table 3), we classified "joining the case" (a first-person indexical) and the implications of a nation-wide investigation as highly concerning. Additionally, we marked all mentions of "case" as highly concerning to evaluate consistent suppression. "DOJ Tax", being a stylometric identifier because it is not an official abbreviation, received a medium LvC, and "thousands of hours" was categorized similarly, as it potentially indicates the author's role as lead in the case. In $exc_2$ (Table 4), we classified the lexical identifier "2018", which could be cross-referenced relatively easily, as well as all descriptive identifiers concerning the author's sexual orientation and outing, as highly concerning.
Furthermore, emotional descriptors ("sleep, vacations, gray hairs, et cetera") were given a medium LvC, as were references to investment in the case ("thousands of hours" and "95 percent"), mirroring the approach from $exc_1$.

5.3.3 Results and Discussion. $exc_1^{sanitized}$ retains $E_{dt2}$, but not $E_{dt1}$, as "DOJ Tax" is replaced with "proper noun" due to the non-existence of a corresponding entity in Wikidata; consequently, the replacement defaults to the token's POS tag. For $A_{id}$, all identified risks were addressed (e.g., "considerable time" replaces "thousands of hours"). However, the generalization of "case" led to inconsistent terms like "matter", "situation", and "issue" due to the NRNgS $= 2$ setting. This is beneficial for reducing authorship attribution accuracy but may confuse readers not familiar with the original context. $exc_2^{sanitized}$ maintains parts of $E_{dt3}$, though terms like "X amount of time" and "Y amount of the investigation" add little value due to their lack of specificity. Notably, "amount o of" represents a rare LLM-induced spelling error, underscoring the need for human editing in real-world use. The broad generalization of the emotional state to "physical health, leisure, grey body covering" is odd and less suitable than a singular term would be. Despite this, $exc_2^{sanitized}$ effectively minimizes $A_{id}$ by addressing all other identified risks.
Table 3: LvC-annotated whistleblower testimony $exc_1$ (excerpt 1) with identifiers (top) and $exc_1^{sanitized}$ (bottom).
Original: "Prior to joining the case, DOJ Tax had approved tax charges for the case and the case was in the process of progressing towards indictment [...] After working thousands of hours on that captive case, poring over evidence, interviewing witnesses all over the U.S., the decision was made by DOJ Tax to change the approval to a declination and not charge the case."
Lexical IDs: DOJ Tax; U.S.
Indexical IDs: [implicit: me] joining the case (first person)
Descriptive IDs: interviewing witnesses all over the U.S. (nationwide investigation); thousands of hours (author involvement)
Sanitized: "The proper noun had approved tax charges for the matter and the situation was moving towards indictment, but after spending considerable time on that captive matter, poring over evidence, the decision was made by proper noun to defer the approval and not charge the issue."

Table 4: LvC-annotated whistleblower testimony $exc_2$ (excerpt 2) with identifiers (top) and $exc_2^{sanitized}$ (bottom).
Original: "I had opened this investigation in 2018, have spent thousands of hours on the case, worked to complete 95 percent of the investigation, have sacrificed sleep, vacations, gray hairs, et cetera. My husband and I, in identifying me as the case agent, were both publicly outed and ridiculed on social media due to our sexual orientation."
Lexical IDs: 2018; thousands of hours; 95 percent
Indexical IDs: me as the case agent (role of author); My husband (author's marital status)
Descriptive IDs: I had opened this investigation in 2018 (can be cross-referenced); My husband and I + publicly outed and ridiculed [...] due to our sexual orientation (author's sexual orientation and public event); sacrificed sleep, [...], gray hairs (emotional state)
Sanitized: "I had opened this investigation on a certain date, had spent X amount of time on the case, worked to complete Y amount of the investigation, sacrificing my physical health, leisure, grey body covering, etc."",
+ "additional_graph_info": {
+ "graph": [
+ [
+ "Dimitri Staufer",
+ "Frank Pallas"
+ ],
+ [
+ "Dimitri Staufer",
+ "Bettina Berendt"
+ ]
+ ],
+ "node_feat": {
+ "Dimitri Staufer": [
+ {
+ "url": "http://arxiv.org/abs/2405.01097v1",
+ "title": "Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification",
+ "abstract": "Whistleblowing is essential for ensuring transparency and accountability in\nboth public and private sectors. However, (potential) whistleblowers often fear\nor face retaliation, even when reporting anonymously. The specific content of\ntheir disclosures and their distinct writing style may re-identify them as the\nsource. Legal measures, such as the EU WBD, are limited in their scope and\neffectiveness. Therefore, computational methods to prevent re-identification\nare important complementary tools for encouraging whistleblowers to come\nforward. However, current text sanitization tools follow a one-size-fits-all\napproach and take an overly limited view of anonymity. They aim to mitigate\nidentification risk by replacing typical high-risk words (such as person names\nand other NE labels) and combinations thereof with placeholders. Such an\napproach, however, is inadequate for the whistleblowing scenario since it\nneglects further re-identification potential in textual features, including\nwriting style. Therefore, we propose, implement, and evaluate a novel\nclassification and mitigation strategy for rewriting texts that involves the\nwhistleblower in the assessment of the risk and utility. Our prototypical tool\nsemi-automatically evaluates risk at the word/term level and applies\nrisk-adapted anonymization techniques to produce a grammatically disjointed yet\nappropriately sanitized text. 
We then use an LLM that we fine-tuned for\nparaphrasing to render this text coherent and style-neutral. We evaluate our\ntool's effectiveness using court cases from the ECHR and excerpts from a\nreal-world whistleblower testimony and measure the protection against\nauthorship attribution (AA) attacks and utility loss statistically using the\npopular IMDb62 movie reviews dataset. Our method can significantly reduce AA\naccuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original\ncontent's semantics.",
+ "authors": [
+ "Dimitri Staufer",
+ "Frank Pallas",
+ "Bettina Berendt"
+ ],
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.CY",
+ "cats": [
+ "cs.CY",
+ "cs.CL",
+ "cs.HC",
+ "cs.IR",
+ "cs.SE",
+ "H.3; K.4; H.5; K.5; D.2; J.4"
+ ],
+ "main_content": "1 INTRODUCTION
In recent years, whistleblowers have become "a powerful force" for transparency and accountability, not just in the field of AI [9], but also in other technological domains and across both private- and public-sector organizations. Institutions such as the AI Now Institute [9] or the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [22] have emphasized the key role of whistleblower protection for societal well-being and often also the organizations' own interests [21]. However, whistleblowing may be a threat to the organizations whose malfeasance is being revealed; thus (potential) whistleblowers often fear or face retaliation. Computationally supported anonymous reporting seems to be a way forward, but even if reporting frameworks are sufficiently secure system- and network-wise, the report itself may allow inferences towards the whistleblower's identity due to its content and the whistleblower's writing style. Non-partisan organizations such as Whistleblower-Netzwerk e.V. (WBN) provide guidance on concise writing. Our interactions with WBN confirm that whistleblower testimonies often include unnecessary personal details. Existing approaches modifying the texts of such reports appear promising, but they take an overly limited view of anonymity and – like whistleblower protection laws – address only parts of the problem. This is detailed in Section 2. To improve on these approaches, we propose, implement, and evaluate a novel classification and mitigation strategy for rewriting texts that puts the whistleblower into the loop of assessing risk and utility.

Our contributions are threefold. First (Section 3), we analyse the interleaved contributions of different types of identifiers in texts to derive a description of the problem for anonymous whistleblowing in terms of a trade-off between risk (identifiability of the whistleblower) and utility (of the rewritten text retaining sufficient information on the specific event details). We derive a strategy for assigning re-identification risk levels of concern to textual features, composed of an automated mapping and an interactive adjustment of concern levels. Second (Section 4), we describe our tool which implements this strategy. It applies (i) the word/term-to-concern mapping using natural language processing to produce a sanitized but possibly ungrammatical intermediate text version, (ii) a Large Language Model (LLM) that we fine-tuned for paraphrasing to render this text coherent and style-neutral, and (iii) interactivity to draw on the user's context knowledge. Third (Section 5), we evaluate the resulting risk-utility trade-off.
We measure the protection against authorship attribution attacks and utility loss statistically using an established benchmark dataset and show that it can significantly reduce authorship attribution accuracy while retaining utility. We also evaluate our tool's effectiveness in masking direct and quasi-identifiers using the Text Anonymization Benchmark [48] and demonstrate its usefulness on excerpts from a real-world whistleblower testimony. Section 6 sketches current limitations and future work. Section 7 describes ethical considerations and researchers' positionality, and it discusses possible adverse impacts.

2 BACKGROUND AND RELATED WORK
This section describes the importance of, and threats to, whistleblowing (Section 2.1) and the promises and conceptual and practical challenges of "anonymity" in reporting (Section 2.2). We survey related work on the anonymization/de-identification of text and argue why it falls short in supporting whistleblowing (Section 2.3).

2.1 Challenges of Safeguarding Whistleblowers
Whistleblowers play a crucial role in exposing wrongdoings like injustice, corruption, and discrimination in organizations [6, 41]. However, their courageous acts often lead to negative consequences, such as subtle harassment and rumors, job loss and blacklisting, and, in extreme cases, even death threats [34, 37, 58]. In Western nations, whistleblowing is largely viewed as beneficial to society [66], leading to protective laws like the US Sarbanes-Oxley Act of 2002 and the European Union's "Whistleblowing Directive" (Directive 2019/1937). The latter, for example, mandates the establishment of safe reporting channels and protection against retaliation. It also requires EU member states to provide whistleblowers with legal, financial, and psychological support. However, the directive faces criticism for its limitations. Notably, it does not cover all public-sector entities [63, p. 3] and leaves key decisions to member states' discretion [1, p. 652]. This discretion extends to the absence of mandatory anonymous reporting channels and permits states to disregard cases they consider "clearly minor", leaving whistleblowers without comprehensive protection against non-material harms like workplace bullying [63, p. 3]. Furthermore, according to White [70], the directive's sectoral approach and reliance on a list of specific EU laws cause a patchwork of provisions, creating a complex and possibly confusing legal environment, particularly for those sectors impacting human rights and life-and-death situations. Last but not least, organizations often react negatively to whistleblowing due to the stigma of errors, even though recognizing these mistakes would be key to building a culture of responsibility [5, p. 12] and improving organizations and society [69]. The reality for whistleblowers is thus fraught with challenges, from navigating legal uncertainties to dealing with public perception [26, 51, 52], leaving many whistleblowers with no option but to report their findings anonymously [50]. However, "anonymous" reporting channels alone do not guarantee anonymity [5].

2.2 Anonymity, (De-)anonymization, and (De-/Re-)Identification
Anonymity is not a binary alternative between being identified uniquely or not at all, but "the state of being not identifiable within a set of subjects [with potentially the same attributes], the anonymity set" [46, p. 9].
Of the manifold possible approaches towards this goal, state-of-the-art whistleblowing-support software as well as legal protections (where existing) focus on anonymous communications [5]. This, however, does not guarantee anonymous reports. Instead, a whistleblower's anonymity may still be at risk due to several factors, including: (i) surveillance technology, such as browser cookies, security mechanisms otherwise useful to prevent unauthenticated uses, cameras, or access logs, (ii) the author's unique writing style, and (iii) the specific content of the message [33]. Berendt and Schiffner [5] refer to the latter as "epistemic non-anonymizability", i.e., the risk of being identified based on the unique information in a report, particularly when the information is known to only a few individuals. In some cases, this may identify the whistleblower uniquely.

Terms and their understanding in the domain of anonymity vary. We use the following nomenclature: anonymization is a modification of data that increases the size of the anonymity set of the person (or other entity) of interest; conversely, de-anonymization decreases it (to some number $k \geq 1$). De-anonymization to $k = 1$, which includes the provision of an identifier (e.g., a proper name), is called re-identification. The removal of some identifying information (e.g., proper names), called de-identification, often but not necessarily leads to anonymization [4, 68]. In structured data, direct identifiers (e.g., names or social security numbers) are unique to an individual, whereas quasi-identifiers like age, gender, or zip code, though not unique on their own, can be combined to form unique patterns. Established mathematical frameworks for quantifying anonymity, such as Differential Privacy (DP) [16], and metrics such as k-anonymity [53], along with their refinements [27, 31], can be used when anonymizing datasets. Unstructured data such as text, which constitutes the vast majority of the world's data, requires its own safeguarding methods, which fall into two broader categories [28]. The first, NLP-based text sanitization, focuses on linguistic patterns to reduce (re-)identification risk. The second, privacy-preserving data publishing (PPDP), involves methods like noise addition or generalization to comply with pre-defined privacy requirements [15].

2.3 Related Work: Text De-Identification and Anonymization, Privacy Models, and Adversarial Stylometry
De-identification methods in text sanitization mask identifiers, primarily using named entity recognition (NER) techniques. These methods, largely domain-specific, have been particularly influential in clinical data de-identification, as evidenced, for instance, by the 2014 i2b2/UTHealth shared task [62]. However, they do not or only partially address the risk of indirect re-identification [4, 38]. For example, Sánchez et al. [55, 56, 57] make the simplifying assumption that replacing noun phrases which are rare in domain-specific corpora or on the web with more general ones offers sufficient protection. Others use recurrent neural networks [12, 30], reinforcement learning [71], support vector machines [65], or pre-trained language models [23] to identify and remove entities that fall into pre-defined categories. However, all of these approaches ignore or significantly underestimate the actual risks of context-based re-identification.
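As a concrete point of reference, the placeholder substitution these systems perform (and which our $baseline_2$ in Section 5.1.2 replicates) can be sketched in a few lines of spaCy. This is a minimal illustration, not any of the cited systems:

```python
# Minimal NER-based placeholder substitution (assumes the spaCy model is installed).
import spacy

nlp = spacy.load("en_core_web_sm")

def mask_entities(text: str) -> str:
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        out.append(text[last:ent.start_char])  # keep text between entities
        out.append(f"[{ent.label_}]")          # replace entity with its label
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(mask_entities("On 24 January 2023, John Smith poured resin into room R23."))
# e.g. "On [DATE], [PERSON] poured resin into room R23."
```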
More advanced anonymization methods, in turn, also aim to detect and remove identifiers that do not fit into the usual categories of named entities or are hidden within context. For example, Reddy and Knight [49] detect and obfuscate gender, and Adams et al. [2] introduce a human-annotated multilingual corpus containing 24 entity types and a pipeline consisting of NER and co-reference resolution to mask these entities. In a more nuanced approach, Papadopoulou et al. [44] developed a "privacy-enhanced entity recognizer" that identifies 240 Wikidata properties linked to personal identification. Their approach includes three key measures to evaluate whether a noun phrase needs to be masked or replaced by a more general one [43]. The first measure uses RoBERTa [29] to assess how "surprising" an entity is in its context, assuming that more unique entities carry higher privacy risks. The second measure checks whether web search results for entity combinations mention the individual in question, indicating potential re-identification risk. Lastly, they use a classifier trained with the Text Anonymization Benchmark (TAB) corpus [48] to predict masking needs based on human annotations. Kleinberg et al.'s [24] "Textwash" employs the BERT model, fine-tuned on a dataset of 3,717 articles from the British National Corpus, Enron emails, and Wikipedia. The dataset was annotated with entity tags such as "PERSON_FIRSTNAME", "LOCATION", and an "OTHER_IDENTIFYING_ATTRIBUTE" category for indirect re-identification risks, along with a "NONE" category for tokens that are non-re-identifying. A quantitative evaluation (0.93 F1 score for detection accuracy, minimal utility loss in sentiment analysis and part-of-speech tagging) and a qualitative assessment (82% / 98% success in anonymizing famous / semi-famous individuals) show promise. However, the more recent gpt-3.5-turbo can re-identify 72.6% of the celebrities from Textwash's qualitative study on the first attempt, highlighting the evolving complexity of mitigating the risk of re-identification in texts [45].

In PPDP, several privacy models for structured data have been adapted for privacy guarantees in text. While most are theoretical [28], "C-sanitise" [54] determines the disclosure risk of a certain term $t$ on a set of entities to protect ($C$), given background knowledge $K$, which by default is the probability of an entity co-occurring with the term $t$ on the web. Additionally, DP techniques have been adapted to text, either for generating synthetic texts [20] or for obscuring authorship in text documents [68]. This involves converting text into word embeddings, altering these vectors with DP techniques, and then realigning them to the nearest words in the embedding model [73, 74]. However, "word-level differential privacy" [35] faces challenges: it maintains the original sentence length, limiting variation, and can cause grammatical errors, such as replacing nouns with unrelated adjectives, due to not considering word types.

Authorship attribution (AA) systems use stylistic features such as vocabulary, syntax, and grammar to identify an author. State-of-the-art approaches involve using Support Vector Machines [64, 72] and, more recently, fine-tuned LLMs like BertAA [3, 18, 64]. The "Valla" benchmark and software package standardizes evaluation methods and includes fifteen diverse datasets [64].
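To make this attack model tangible, a character n-gram attributor in the spirit of Valla's strongest non-neural model can be assembled with standard tooling; the feature configuration below is an illustrative assumption, not Valla's exact setup:

```python
# Character n-gram authorship attribution with TF-IDF features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "I must say the cinematography was simply wonderful!!",
    "plot was ok. acting was not. would not recommend",
]
authors = ["author_a", "author_b"]  # gold labels for the training documents

attributor = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4), sublinear_tf=True),
    LinearSVC(),
)
attributor.fit(texts, authors)
print(attributor.predict(["the acting was ok, would not recommend"]))
```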
Contrasting such attribution systems, adversarial stylometry modifies an author's writing style to reduce AA systems' effectiveness [61]. Advancements in machine translation [67] have also introduced new methods based on adversarial training [60], though these sometimes struggle with preserving the original text's meaning. Semi-automated tools, such as "Anonymouth" [36], propose modifications for anonymity in a user's writing but require a significant corpus of the user's own texts. Moreover, recent advances in automatic paraphrasing using fine-tuned LLMs have demonstrated a notable reduction in authorship attribution, but primarily for shorter texts [35].

To the best of our knowledge, there is no – and maybe there can be no – complete list of textual features contributing to the re-identification of individuals in text. As Narayanan and Shmatikov [40] highlight, "any attribute can be identifying in combination with others" [p. 3]. In text, we encounter elements like characters, words, and phrases, each carrying varying levels of meaning [19]. Single words convey explicit lexical meaning as defined by a vocabulary (e.g. "employee"), while multiple words are bound by syntactic rules to express more complex thoughts implicitly in phrases ("youngest employee") and sentences ("She is the youngest employee"). In addition, the European Data Protection Supervisor (EDPS) and the Spanish Data Protection Agency (AEPD) [17] state that anonymization can never be fully automated and needs to be "tailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons" [p. 7]. To take these insights and limitations into account, our semi-automated text sanitization tool leverages insights on the removal of identifying information but involves the whistleblower (the user) in the decision-making process.

3 RISK MODELLING AND RISK MITIGATION APPROACH
In this section, we derive the problem statement (Section 3.2) from an analysis of different identifier types (Section 3.1). Following an overview of our approach (Section 3.3), we detail the anonymization operations for textual features (Section 3.4) and the automatic assignment of default concern levels (Section 3.5).

3.1 Identifier Types, Author Identifiability, and Event Details in the Whistleblowing Setting
Whistleblowing reports convey information about persons, locations, and other entities. At least some of them need to be identified in order for the report to make any sense. The following fictitious example consists of three possible versions of a report in order to illustrate how different types of identifiers may contribute to the re-identification of the anonymously reporting employee Jane Doe, a member of the Colours and Lacquer group in the company COLOURIFICS.

V1 On 24 January 2023, John Smith poured polyurethane resin into the clover-leaf-shaped sink of room R23.
V2 After our group meeting on the fourth Tuesday of January 2023, the head of the Colours and Lacquer Group poured a toxin into the sink of room R23.
V3 Somebody poured a liquid into a recepticle on some date in a room of the company.

In V1, "John Smith" is the lexical identifier1 of the COLOURIFICS manager John Smith, as is "24 January 2023" of that date. Like John Smith, room R23 is a unique named entity in the context of the company and is also identified lexically.
"Polyurethane resin" is the lexical identifier of a toxin (both are common nouns rather than names of individual instances of their category). The modifier "clover-leaf-shaped" serves as a descriptive identifier of the sink. In V2, John Smith is still identifiable via the descriptive identifier "head of the Colours and Lacquer Group", at least on 24 January 2023 (reconstructed with the help of a calendar and COLOURIFICS' personnel files). "Our" group meeting is an indexical identifier that signals that the whistleblower is one of the, say, five employees in the Colours and Lacquer Group. The indexical information is explicit in V2, given the background knowledge that only employees in this group were co-present (documented, for example, in the company's key-card logfiles). The same information may be implicit in V1 (if it can be seen from the company's organigram who John Smith is and who works in his group). Both versions provide for the inference that Jane Doe or any of her four colleagues must have been the whistleblower. If, in addition, only Jane Doe stayed behind "after the meeting", that detail in V2 descriptively identifies her uniquely.2 V3 contains only identifiers of very general categories. Many other variants are possible (for example, referencing, in a V4, "the head of our group", which would enlarge the search space to all groups that had a meeting in R23 that day). The example illustrates the threats (i)-(iii) of Section 2.2. It also shows that the whistleblower's "anonymity" (or lack thereof) is only one aspect of a more general and graded picture of who and what can be identified directly, indirectly, or not at all – and what this implies for the whistleblower's safety as well as for the report's effectiveness.

1 The classification of identifiers is due to Phillips [47]. Note that all types of identifiers can give rise to personal data in the sense of the EU's General Data Protection Regulation (GDPR), Article 4(1): "any information which is related to an identified or identifiable natural person", or to personally identifiable data in the senses used in different US regulations. See [11] for legal aspects in the context of whistleblowing.
2 If John Smith knows that only she observed him, she is also uniquely identified in V1, but for the sake of the analysis, we assume that only recorded data/text constitute the available knowledge.

Inspired by Domingo-Ferrer's [14] three types of (data) privacy, we distinguish between the identifiability of the whistleblower Jane Doe (author3 identifiability, $A_{id}$) and descriptions of the event or other wrongdoing, including other actors (event details, $E_{dt}$). Given the stated context knowledge, we obtain an anonymity set of size $k = 1$ for John Smith in V1 and V2. Jane Doe is in an anonymity set of size $k = 5$ or even $k = 1$ in V2. In V1, that set may be of size $k = 5$ (if people routinely work only within their group) or larger (if they may also join other groups). Thus, the presence of a name does not necessarily entail a larger risk. In V3, both are in an anonymity set containing all the company's employees at the reported date (assuming no outsiders have access to company premises).
The toxin and the sink may be in a smaller anonymity set in V1 than in V2 or V3, and these sets could shrink further (for example, if only certain employees have access to certain substances). Importantly, the identifiability of people and other entities in $E_{dt}$ can increase the identifiability of the whistleblower. V3 illustrates a further challenge: the misspelled "recepticle" may be a typical error of a specific employee, and the incorrect placement of the temporal before the spatial information suggests that the writer may be a German or Dutch native speaker. In addition to errors, correct variants, too, carry information that stylometry can use for authorship attribution, which obviously can have a large effect on $A_{id}$. The whistleblower would, on the one hand, want to reduce all such identifiabilities as much as possible. On the other hand, the extreme generalization of V3 creates a meaningless report that neither the company nor a court would follow up on. This general problem can be framed in terms of risk and utility, which will be described next.

3.2 The Whistleblowing Text-Writing Problem: Risk, Utility, and Many Unknowns
A potential whistleblower faces the following problem: "make $A_{id}$ as small as possible while retaining as much $E_{dt}$ as necessary". We propose to address this problem by examining the text and possibly rewriting it. In principle, this is an instance of the oft-claimed trade-off between privacy (or other risk) and utility. In a simple world of known repositories of structured data, one could aim at determining the identification problem exactly (e.g., by database joins to identify the whistleblower via some attributive information they reveal about themselves, and by multiple joins for dependencies such as managers and teams) and compute how large the resulting anonymity set (or $A_{id}$ as its inverse) is. Given a well-defined measure of information utility, different points on the trade-off curve would then be well-defined and automatically derivable solutions to a mathematical optimization problem. However, texts offer a myriad of ways to express a given piece of relational information. The space of information that could be cross-referenced, sometimes in multiple steps, is huge and often unknown to the individual. Consequently, in many cases, it is not possible to determine the anonymity set size with any mathematical certainty. In addition, setting a threshold could be dangerous: even if the anonymity set is $k > 1$, protection is not guaranteed – for example, the whole department of five people could be fired in retaliation. At the same time, exactly how specific a re-written text needs to be about $A_{id}$ and $E_{dt}$ in order to make the report legally viable4 cannot be decided without much more context knowledge. For example, the shape of the sink into which a toxic substance is poured probably makes no difference to the illegality, whereas the identity of the substance may affect it.

3 We assume that the potential whistleblower is also the author of the report. This is the standard setting. Modifications for the situation in which a trusted third party writes the report on their behalf are the subject of future work.
4 "a situation in which a plan, contract, or proposal is able to be legally enforced", https://ludwig.guru/s/legally+viable, retrieved 2024-01-02
These unknowns have repercussions both for tool design (Section 3.3) and for evaluation design (Section 5.1.1).

3.3 Risk Mitigation Approach and Tool Design: Overview
Potential whistleblowers would be ill-served by any fully automated tool that claims to be able to deliver a certain, mathematically guaranteed anonymization. Instead, we propose to provide them with a semi-automated tool that does have some "anonymity-enhancing defaults", which illustrate on the concrete material how textual elements can be identifying and how they can be rendered less identifying. Our tool starts with the heuristic default assumption that identifiability is potentially always problematic and then lets the user steer it by specifying how "concerning" specific individual elements are and by choosing, interactively, the treatment of each of them that appears to give the best combination of $A_{id}$ and $E_{dt}$. By letting the author/user assign these final risk scores in the situated context of the evolving text, we enable them to draw on a maximum of implicit context knowledge.

Our approach and tool proceed through several steps. We first determine typical textual elements that can constitute or be part of the different types of identifiers. As can be seen in Table 1, most of them can affect both $A_{id}$ and $E_{dt}$. Since identification by name (or, by extension, by pronouns that co-reference names) does not even need additional background knowledge, and since individuals are more at risk than generics, we classify some textual features as "highly concerning", others as having "medium concern", and the remainder as "potentially concerning". We differentiate between two types of proper nouns. Some names refer to typical "named entities", which include, in particular, specific people, places, and organizations, as well as individual dates and currency amounts. These pose a particular person-identification risk in whistleblowing scenarios.5 "Other proper nouns", such as titles of music pieces, books, and artworks, generally pose only medium risk. For stylometric features, we explicitly categorize out-of-vocabulary words, misspelled words, and words that are surprising given the overall topic of the text. Other low-level stylometric features, such as punctuation patterns, average word and sentence length, or word and phrase repetition, are not (and in many cases, such as with character n-gram patterns, cannot be [25]) explicitly identified. Instead, we implicitly/indirectly account for them as a byproduct of the LLM-based rephrasing. For all other parts of speech, we propose to use replacement strategies based on data-anonymization operations that are proportional to the risk (Table 2). Given the complexities of natural language and potential context information, the latter two operations are necessarily heuristic; thus, our tool applies the classification and the risk mitigation strategy as a default which can then be adapted by the user.

5 PERSON, GPE (region), LOC (location), EVENT, LAW, LANGUAGE, DATE, TIME, PERCENT, MONEY, QUANTITY, and ORDINAL
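The default assignment just described can be pictured as a small rule set over spaCy's POS and NER labels. The following sketch mirrors the defaults summarized in Table 1 below; it is illustrative and simplified (it omits the misspelling, out-of-vocabulary, and surprisal rules) and is not the tool's actual implementation:

```python
import spacy

# Named-entity labels treated as "names of named entities" (see footnote 5).
HIGH_RISK_NE = {"PERSON", "GPE", "LOC", "EVENT", "LAW", "LANGUAGE", "DATE",
                "TIME", "PERCENT", "MONEY", "QUANTITY", "ORDINAL"}

def default_lvc(token) -> str | None:
    """Default level of concern for a spaCy token, following Table 1."""
    if token.ent_type_ in HIGH_RISK_NE:
        return "high"        # names of named entities
    if token.pos_ == "PRON":
        return "high"        # indexical pronouns
    if token.pos_ == "PROPN":
        return "medium"      # other proper nouns
    if token.pos_ in {"NOUN", "ADJ", "ADV"}:
        return "potential"   # common nouns and modifiers
    return None              # other stylometric features: handled via rephrasing

nlp = spacy.load("en_core_web_sm")
doc = nlp("On 24 January 2023, John Smith poured resin into the sink.")
print([(t.text, default_lvc(t)) for t in doc])
```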
Table 1: Overview of the approach from identifier types to default risk.
| Identifier Type | Textual Feature | $A_{id}$/$E_{dt}$ | Default Risk |
| Lexical | Names of named entities | $A_{id}$, $E_{dt}$ | High |
| Lexical | Other proper nouns | $E_{dt}$ | Medium |
| Indexical | Pronouns | $A_{id}$, $E_{dt}$ | High |
| Descriptive | Common nouns | $E_{dt}$, ($A_{id}$) | Potential |
| Descriptive | Modifiers | $E_{dt}$, ($A_{id}$) | Potential |
| Descriptive (via pragmatic inferences) | Out-of-vocabulary words (a) | $A_{id}$, ($E_{dt}$) | Medium |
| Descriptive (via pragmatic inferences) | Misspelled words (a) | $A_{id}$ | Medium |
| Descriptive (via pragmatic inferences) | Surprising words (b) | $A_{id}$ | Medium |
| Descriptive (via pragmatic inferences) | Other stylometric features | $A_{id}$ | N/A (c) |
(a) Treated as noun. (b) Nouns or proper nouns. (c) Not explicitly specified; indirectly accounted for through rephrasing.

Table 2: Mitigation strategies based on assigned risk (LvC = level of concern, NaNEs = names of named entities, OPNs = other proper nouns, CNs = common nouns, Mods = modifiers, PNs = pronouns, OSFs = other stylometric features).
| LvC | NaNEs | OPNs | CNs | Mods | PNs | OSFs |
| High | Suppr. | Suppr. | Suppr. | Suppr. | Suppr. | Pert. |
| Medium | Pert. | Generl. | Generl. | Pert. | Suppr. | Pert. |

3.4 Anonymization Operations for Words and Phrases
In our sanitization pipeline, we conduct various token removal and replacement operations based on each token's POS tag and its assigned level of concern (LvC), which can be "potentially concerning", "medium concerning", or "highly concerning". Initially, we consider all common nouns, proper nouns, adjectives, adverbs, pronouns, and named entities6 as potentially concerning. Should the user or our automatic LvC estimation (see Section 3.5) elevate the concern to either medium or high, we apply anonymization operations that are categorized into generalization, perturbation, and suppression. Specific implementation details are elaborated on in Section 4.

3.4.1 Generalization. The least severe type of operation targets common nouns and other proper nouns marked as medium concerning. We assume that their specificity (not necessarily their general meaning) poses re-identification risks; thus, more general terms can be used to preserve meaning while mitigating the risk of re-identification.
• Common nouns like "car" are replaced with hypernyms from WordNet, such as "vehicle".
• Other proper nouns become broader Wikidata terms, e.g. "political slogan" for "Make America Great Again".

3.4.2 Perturbation. This applies to modifiers7 and named entities annotated as medium concerning. In this process, the original words are retained but are assigned zero weight in the paraphrase generation, along with their synonyms and inflections. This approach relies on the LLM to either (a) find similar but non-synonymous replacement words or (b) completely rephrase the sentence to exclude these words. For example, "Microsoft, the giant tech company, ..." could be paraphrased as "A leading corporation in the technology sector...".

6 By this, we mean names of named entities, e.g. "Berlin" for GPE, but we use named entities instead for consistency with other literature.
7 The current version of our tool considers only adjectives and adverbs as modifiers.
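Read as a lookup table, Table 2 amounts to a small mapping from feature class and LvC to an operation. The following restatement is purely illustrative (the identifiers and schema are not our tool's actual API):

```python
# Table 2 as a lookup: feature class x level of concern -> operation.
MITIGATION = {
    "named_entity":        {"high": "suppress", "medium": "perturb"},
    "other_proper_noun":   {"high": "suppress", "medium": "generalize"},
    "common_noun":         {"high": "suppress", "medium": "generalize"},
    "modifier":            {"high": "suppress", "medium": "perturb"},
    "pronoun":             {"high": "suppress", "medium": "suppress"},
    "stylometric_feature": {"high": "perturb",  "medium": "perturb"},
}

def operation(feature_class: str, lvc: str) -> str:
    # "potentially concerning" tokens are left untouched unless escalated
    return MITIGATION.get(feature_class, {}).get(lvc, "keep")

assert operation("common_noun", "medium") == "generalize"
assert operation("pronoun", "high") == "suppress"
```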
3.4.3 Suppression. The most severe type of operation is applied to common nouns, other proper nouns, modifiers, and named entities annotated as highly concerning, and to pronouns that are either medium or highly concerning. We assume these words are either too unique or cannot be generalized.
• For common nouns and other proper nouns, dependent phrases are omitted (e.g., "We traveled to the London Bridge in a bus." becomes "We traveled in a bus.").
• Modifiers are removed (e.g., "He used to be the principal dancer" becomes "He used to be a dancer").
• Named entities are replaced with nondescript phrases (e.g., "Barack Obama" becomes "certain person").
• Pronouns are replaced with "somebody" (e.g., "He drove the bus." becomes "Somebody drove the bus.").

3.5 Automatic Level of Concern (LvC) Estimation
In our whistleblowing context, we deem the detection of outside-document LvC via search engine queries, as proposed by Papadopoulou et al. [44] (see the related work in Section 2.3), impractical. This is because whistleblowers are typically not well known, and the information they disclose is often novel and not commonly found on the internet. Therefore, instead of relying on external data, we focus on inner-document LvC, setting up a rule-based system and allowing users to adjust the LvC based on their contextual knowledge. Further, we assume that this pre-annotation of default concern levels raises awareness of potential sources of re-identification.
• Common nouns and modifiers, by default, are potentially concerning. As fundamental elements in constructing a text's semantic understanding, they could inadvertently reveal re-identifying details like profession or location. However, without additional context, their LvC is not definitive.
• Other proper nouns, unexpected words, misspelled words, and out-of-vocabulary words default to medium concerning. Unlike categorized named entities, other proper nouns only indirectly link to individuals, places, or organizations. Unexpected words may diminish anonymity, according to Papadopoulou et al. [44], while misspelled or out-of-vocabulary words can be strong stylometric indicators.
• Named entities are considered highly concerning by default, as they directly refer to specific entities in the world, like people, organizations, or locations, posing a significant re-identification risk.

4 IMPLEMENTATION
Our semi-automated text sanitization tool consists of a sanitization pipeline (Sections 4.1 and 4.2) and a user interface (Section 4.3). The pipeline uses off-the-shelf Python NLP libraries (spaCy, nltk, lemminflect, constituent_treelib, sentence-transformers) and our paraphrasing-tuned FLAN T5 language model. FLAN T5's error-correcting capabilities [39, 42] aid in reconstructing sentence fragments after words or phrases with elevated levels of concern have been removed. The user interface is built with standard HTML, CSS, and JavaScript. Both components are open source and available on GitHub.8

8 https://github.com/dimitristaufer/Semi-Automated-Text-Sanitization

4.1 Anonymization Operations for Words and Phrases
4.1.1 Generalization. Common nouns undergo generalization by first retrieving their synsets and hypernyms from WordNet, followed by calculating the cosine similarity of their sentence embeddings with those of the hypernyms. This calculation ranks the hypernyms by semantic similarity to the original word, enabling the selection of the most suitable replacement. By default, we select the closest hypernym.
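A condensed sketch of this ranking, assuming the nltk WordNet corpus and the all-mpnet-base-v2 encoder named elsewhere in this section, might look as follows (simplified; the tool's actual code also handles inflection and context):

```python
# Generalize a common noun to its semantically closest WordNet hypernym.
from nltk.corpus import wordnet as wn                    # requires nltk.download("wordnet")
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-mpnet-base-v2")

def generalize(noun: str) -> str:
    candidates = {lemma.name().replace("_", " ")
                  for syn in wn.synsets(noun, pos=wn.NOUN)
                  for hyper in syn.hypernyms()
                  for lemma in hyper.lemmas()}
    if not candidates:
        return noun
    cand = sorted(candidates)  # deterministic ordering
    sims = util.cos_sim(encoder.encode([noun]), encoder.encode(cand))[0]
    return cand[int(sims.argmax())]  # closest hypernym, e.g. "car" -> "motor vehicle"

print(generalize("car"))
```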
Other proper nouns are generalized as follows: we first query Wikipedia to identify the term, using the all-mpnet-base-v2 sentence transformer to disambiguate its meaning through cosine similarity. Next, we find the most relevant Wikidata QID and its associated hierarchy. We then flatten these relationships and replace the entity with the next higher-level term in the hierarchy.

4.1.2 Perturbation. We add randomness to modifiers and named entities through LLM-based paraphrasing, specifically by using the FLAN-T5 language model, which we fine-tuned for paraphrase generation (Section 4.2). To achieve perturbation,9 we give the tokens in question, along with their synonyms and inflections, zero weight during next-token prediction. This forces the model to either use a less probable word (controlled by the temperature hyperparameter) or rephrase the sentence to omit the token. Using an LLM for paraphrase generation has the added benefit that it mends fragmented sentences caused by token suppression and yields a neutral writing style, adjustable through the no_repeat_ngram_size hyperparameter.

9 The strategies "suppression" and "generalization" are straightforward adaptations of the classical methods for structured data. Perturbation "replaces original values with new ones by interchanging, adding noise or creating synthetic data" [7]. Interchanging would create ungrammatical texts, and noise can only be added to certain data. We therefore generate synthetic data via LLM rephrasing, disallowing the highly specific words/terms and their synonyms while producing a new but grammatical text.
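In Hugging Face's generation API, banning tokens via bad_words_ids approximates this zero-weighting (the banned sequences receive negative-infinite logits). The following sketch is illustrative and uses the public base checkpoint, not our fine-tuned model; the word list and prompt prefix are assumptions:

```python
# Perturbation sketch: forbid the flagged token and its (precomputed) variants
# during decoding, forcing different wording or a restructured sentence.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

banned = ["Microsoft", "giant", "gigantic", "huge"]  # flagged word + variants
bad_words_ids = tok(banned, add_special_tokens=False).input_ids

inputs = tok("paraphrase: Microsoft, the giant tech company, was sued.",
             return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True, temperature=0.8, no_repeat_ngram_size=2,
    bad_words_ids=bad_words_ids, max_length=512,
)
print(tok.decode(out[0], skip_special_tokens=True))
```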
4.1.3 Suppression. Common nouns and other proper nouns are suppressed by removing the longest phrase containing them with the constituent_treelib library. Sentences with just one noun or proper noun are removed entirely. Otherwise, the longest phrase, be it a main clause, verb phrase, prepositional phrase, or noun phrase, is identified, removed, and replaced with an empty string. Modifiers are removed (e.g., "He is their principal dancer" → "He is their · dancer"). Pronouns are replaced with the static string "somebody". For example, "His apple" → "Somebody apple" (after replacement) → "Somebody's apple" (after paraphrase generation). Named entities are replaced with static phrases based on their type. For example, "John Smith sent her 2 Million Euros from his account in Switzerland" → "certain person sent somebody certain money from somebody account in certain location" (after suppressing pronouns and named entities) → "A certain individual sent a specific amount of money to whoever's account in some particular place" (after paraphrase generation).

4.2 Paraphrase Generation
We fine-tuned two variants of the FLAN T5 language model, FLAN T5-Base and FLAN T5-XL, using the "chatgpt-paraphrases" dataset, which uniquely combines three large paraphrasing datasets covering varied topics and sentence types. It includes question paraphrasing from the "Quora Question Pairs" dataset, context-based paraphrasing from "SQuAD2.0", and summarization-based paraphrases from the "CNN-DailyMail News Text Summarization" dataset. Furthermore, it was enriched with five diverse paraphrase variants for each sentence pair, generated by the gpt-3.5-turbo model, resulting in 6.3 million unique pairs. This diversity enhances our model's paraphrasing capabilities and reduces overfitting. For training, we employed Parameter-Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation), which adapts the model to new data without the need for complete retraining. We quantized the model weights to enhance memory efficiency using bitsandbytes. We trained FLAN T5-Base on an NVIDIA A10G Tensor Core GPU for one epoch (35.63 hours) on 1 million paraphrase pairs, using an initial learning rate of 1e-3. After one epoch, we achieved a minimum cross-entropy loss of 1.195. FLAN T5-XL was trained for one epoch (22.38 hours) on 100,000 pairs and achieved 0.88.
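For readers who want to reproduce a comparable setup, a minimal PEFT/LoRA configuration might look as follows. Apart from the quantization and the stated learning rate, the adapter hyperparameters shown are illustrative assumptions, and the training loop itself is omitted:

```python
# Hedged sketch of a LoRA adapter on quantized FLAN-T5 (not our exact script).
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base",
                                             load_in_8bit=True)  # bitsandbytes
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative adapter values
    target_modules=["q", "v"],               # T5 attention projections
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trained
# ... then fine-tune on paraphrase pairs with an initial learning rate of 1e-3.
```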
Risk Without further assumptions about the (real-world case-specific) background knowledge, it is impossible to exactly quantify the ultimate risk of re-identification (see Section 3.1). We therefore only measure the part of \ud835\udc34\ud835\udc56\ud835\udc51 where (a) the context knowledge is more easily circumscribed (texts from the same author) and (b) benchmarks are likely to generalize across case studies: the risk of re-identification based on stylometric features, measured as authorship attribution accuracy (AAA). Utility It is also to be expected that the rewriting reduces \ud835\udc38\ud835\udc51\ud835\udc61, yet again it is impossible to exactly determine (without real-world case-specific background knowledge and legal assessment) whether the detail supplied is sufficient to allow for legal follow-up of the report or even only to create alarm that could then be followed up. We, therefore, measure \ud835\udc38\ud835\udc51\ud835\udc61 utility through two proxies: a semantic similarity measure and a sentiment classifier. To estimate semantic similarity (SSim), we calculate the cosine similarity of both texts\u2019 sentence embeddings using the SentenceTransformer10 Python framework. To determine the absolute sentiment score difference (SSD), we classify the texts\u2019 sentiment using an off-the-shelf BERT-based classifier11 from Hugging Face Hub. All measures are normalized to take on values between 0 and 1, and although the absolute values of the scores between these endpoints (except for authorship attribution) cannot be interpreted directly, the comparison of relative orders and changes will give us a first indication of the impacts of different rewriting strategies on \ud835\udc34\ud835\udc56\ud835\udc51 and \ud835\udc38\ud835\udc51\ud835\udc61. 5.1.2 Data, language models, and settings. We investigate protection against authorship attribution attacks with the popular IMDb62 movie reviews dataset [59], which contains 62,000 movie reviews by 62 distinct authors. We assess AAA using the \u201cValla\u201d software package [64], specifically its two most effective models: one based on character n-grams and the other on BERT. This approach covers both ends of the authorship attribution spectrum [3], from low-level, largely topic-independent character n-grams to the context-rich features of the pre-trained BERT model. The evaluation was conducted on AWS EC2 \u201cg4dn.xlarge\u201d instances with NVIDIA T4 GPUs. We processed 130 movie reviews for each of the 62 authors across twelve FLAN T5 configurations, totaling 96,720 texts with character counts spanning from 184 to 5248. Each review was sanitized with its textual elements assigned their default LvCs (see 3.5). Both model sizes, \u201cBase\u201d (250M parameters) and \u201cXL\u201d (3B parameters), were tested with temperature values T of 0.2, 0.5, and 0.8, as well as with no_repeat_ngram_size (NRNgS) set to 0 or 2. The former, temperature, controls the randomness of the next-word predictions by scaling the logits before applying softmax, which makes the predictions more or less deterministic. For our scenario, this causes smaller or greater perturbation of the original text\u2019s meaning. The latter, NRNgS, disallows any n consecutive tokens from being repeated in the generated text, which for our scenario means deviating more or less from the original writing style.
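As a rough, self-contained stand-in for the character-n-gram end of this spectrum (not the \u201cValla\u201d package itself; the toy data and feature settings are assumptions), such an attribution attack can be sketched with scikit-learn:

```python
# Sketch: character-n-gram authorship attribution, the kind of attack whose
# accuracy (AAA) is measured before and after sanitization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

train_texts = ["i luv this movie soooo much!!", "A measured, capably acted film."]
train_authors = ["author_a", "author_b"]  # toy stand-ins for the 62 IMDb62 authors
test_texts = ["i totally luv the acting!!"]
test_authors = ["author_a"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4), sublinear_tf=True),
    LinearSVC(),
)
clf.fit(train_texts, train_authors)
aaa = accuracy_score(test_authors, clf.predict(test_texts))
print(f"authorship attribution accuracy: {aaa:.2%}")
```

Replacing test_texts with their sanitized counterparts shows how far a rewriting drives the attack's accuracy down.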
The risk-utility trade-offs of all configurations are compared to three baselines: \ud835\udc35\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc521 is the original text. In \ud835\udc35\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc522, similar to state-of-the-art related work [24, 44], we only redact named entities by replacing them with placeholders, such as \u201c[PERSON]\u201d, and do not utilize our language model. Similarly, in \ud835\udc35\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc523 we only remove named entities but rephrase the texts using our best-performing model configuration regarding AA protection. 5.1.3 Results. The n-gram-based and BERT-based \u201cValla\u201d classifiers achieved AAA baselines of 98.81% and 98.80%, respectively. As expected, the AAA and text-surface similarities varied significantly depending on the model configuration. The XL-model generated texts with much smaller ROUGE-L and ROUGE-S scores, i.e. more lexical and syntactic deviation from the original texts. Using \ud835\udc41\ud835\udc45\ud835\udc41\ud835\udc54\ud835\udc46 = 2 slightly decreased AAA in all configurations while not significantly affecting semantic similarity, which is why we use this for all the following results. 10all-mpnet-base-v2 11bert-base-multilingual-uncased Figure 2: Risk-utility trade-offs. (a) Risk-utility trade-off between AAA and SSim; (b) risk-utility trade-off between AAA and SSD. Figure 2 (a) shows the risk-utility trade-off between AAA and SSim. \u201cTop-left\u201d (0,1) would be the fictitious best result. For each model configuration, increasing \ud835\udc47 caused AAA to drop but also decreased utility by \u223c8%/4% (BASE/XL) for SSim and \u223c12%/3% (BASE/XL) for SSD. The figure shows that the investigated settings create a trade-off curve, with XL (\ud835\udc47 = 0.8, \ud835\udc41\ud835\udc45\ud835\udc41\ud835\udc54\ud835\udc46 = 2) allowing for a large reduction in AAA (to 31.22%, as opposed to the original text \ud835\udc4f\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc521 of 98.81%), while BASE (\ud835\udc47 = 0.2, \ud835\udc41\ud835\udc45\ud835\udc41\ud835\udc54\ud835\udc46 = 0) retains the most SSim (0.731, as opposed to the original texts, which have \ud835\udc46\ud835\udc46\ud835\udc56\ud835\udc5a = 1 to themselves). Figure 2 (b) shows the risk-utility trade-off between AAA and SSD (the plot shows 1-SSD to retain \u201ctop left\u201d as the optimal point). The results mirror those of AAA-SSim, except for \ud835\udc4f\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc522: because only named entities (not considered sentiment-carrying) are removed, the sentiment score changes only minimally.
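The two utility proxies behind these plots can be reproduced along the following lines. This is a sketch only: the embedding model matches footnote 10, while the sentiment pipeline and its score mapping are assumed stand-ins for the BERT-based classifier of footnote 11:

```python
# Sketch: semantic similarity (SSim) and absolute sentiment score difference (SSD)
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-mpnet-base-v2")
sentiment = pipeline("sentiment-analysis")  # assumed stand-in classifier

def ssim(original, sanitized):
    a, b = embedder.encode([original, sanitized], convert_to_tensor=True)
    return float(util.cos_sim(a, b))  # cosine similarity of sentence embeddings

def ssd(original, sanitized):
    def score(text):
        res = sentiment(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.98}
        return res["score"] if res["label"] == "POSITIVE" else 1.0 - res["score"]
    return abs(score(original) - score(sanitized))
```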
5.1.4 Discussion. In summary, all our models offer a good compromise between baselines representing state-of-the-art approaches. They have lower risk and higher or comparable utility compared to \ud835\udc4f\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc522, where only named entities are removed. This indicates the effectiveness of LLM-based rephrasing in protecting against authorship attribution. \ud835\udc35\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc59\ud835\udc56\ud835\udc5b\ud835\udc523, which involves suppressing named entities and rephrasing, shows the lowest risk due to limited content left for the LLM to reconstruct, resulting in mostly short, arbitrary sentences, as reflected by low SSim scores. 5.2 Re-Identification Through Content: European Court of Human Rights Cases Pil\u00e1n et al.\u2019s [48] Text Anonymization Benchmark (TAB) includes a corpus of 1,268 English-language court cases from the European Court of Human Rights, in which directly- and quasi-identifying nominal and adjectival phrases were manually annotated. It solves several issues that previous datasets have, such as being \u201cpseudoanonymized\u201d, including only a few categories of named entities, not differentiating between identifier types, containing only famous individuals, or being small. TAB\u2019s annotation is focused on protecting the identity of the plaintiff (also referred to as \u201capplicant\u201d). 5.2.1 Evaluation Metrics. TAB introduces two metrics, entity-level recall (\ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56/\ud835\udc5e\ud835\udc56) to measure privacy protection and token-level weighted precision (\ud835\udc4a\ud835\udc43\ud835\udc51\ud835\udc56+\ud835\udc5e\ud835\udc56) for utility preservation. Entity-level means that an entity is only considered safely removed if all of its mentions are. \ud835\udc4a\ud835\udc43\ud835\udc51\ud835\udc56+\ud835\udc5e\ud835\udc56 uses BERT to determine the information content of a token t by estimating the probability of t being predicted at position i. Thus, precision is low if many t with high information content are removed. Both metrics use micro-averaging over all annotators to account for multiple valid annotations. Because our tool automatically rephrases the anonymized texts, we make two changes. First, since we cannot reliably measure \ud835\udc4a\ud835\udc43\ud835\udc51\ud835\udc56+\ud835\udc5e\ud835\udc56, we fall back to our previously introduced proxies for measuring \ud835\udc38\ud835\udc51\ud835\udc61 utility. Second, we categorize newly introduced entities from LLM hallucination that may change the meaning of the sanitized text. The legal texts, which must prefer direct and commonly-known identifiers, are likely to present none or far fewer of the background-knowledge-specific re-identification challenges of our domain. Thus, again the metrics used here should be regarded as proxies. Risk We measure \ud835\udc34\ud835\udc56\ud835\udc51 using \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56/\ud835\udc5e\ud835\udc56 and count slightly rephrased names of entities as \u201cnot removed\u201d using the Levenshtein distance. For example, rephrasing \u201cUSA\u201d as \u201cU.S.A\u201d has the same influence on \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56/\ud835\udc5e\ud835\udc56 as failing to remove \u201cUSA\u201d. Utility We estimate \ud835\udc38\ud835\udc51\ud835\udc61 through SSim. In addition, we determine all entities in the sanitized text that are not in the original text (again using the Levenshtein distance). We categorize them into (1) rephrased harmful entities (semantically identical to at least one entity that should have been masked), (2) rephrased harmless entities, and (3) newly introduced entities. We measure semantic similarity by calculating the cosine similarity of each named entity phrase\u2019s sentence embedding to those in the original text.
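A minimal sketch of this entity-level, Levenshtein-tolerant recall might look as follows (illustrative only; the distance threshold and data layout are assumptions):

```python
# Sketch: entity-level recall -- an entity only counts as removed if none of its
# mentions (or near-identical rephrasings of them) survive in the sanitized text.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def leaked(mention, text, max_dist=1):
    # scan same-length word windows for a close match, e.g. "USA" vs. "U.S.A"
    words, n = text.split(), len(mention.split())
    return any(
        levenshtein(mention.lower(), " ".join(words[i:i + n]).lower()) <= max_dist
        for i in range(len(words) - n + 1)
    )

def entity_level_recall(entities, sanitized_text):
    # entities: {entity_id: [mention strings]} from the gold annotation
    removed = sum(
        not any(leaked(m, sanitized_text) for m in mentions)
        for mentions in entities.values()
    )
    return removed / len(entities)
```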
5.2.2 Data, language models, and settings. The TAB corpus comprises the first two sections (introduction and statement of facts) of each court case. For our evaluation, we use the test split, which contains 127 cases, each with, on average, 2174 characters (356 words) and 13.62 annotated phrases. We perform all experiments using the \u201cXL\u201d (3B parameter) model with temperature values T of 0.2, 0.5, and 0.8, as well as with NRNgS set to 2. 5.2.3 Results and Discussion. \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56/\ud835\udc5e\ud835\udc56 and SSim vary slightly, but not significantly for different T values. For T = 0.2, we get an entity-level recall on quasi-identifiers (\ud835\udc38\ud835\udc45\ud835\udc5e\ud835\udc56) of 0.93, which is slightly better than Pil\u00e1n et al.\u2019s [48] best performing model trained directly on the TAB corpus (0.92). However, our result for direct identifiers \ud835\udc38\ud835\udc45\ud835\udc51\ud835\udc56 is 0.53, while theirs achieves 1.0, i.e. does not miss a single high-risk entity. Closer inspection reveals that our low results for direct identifiers come mainly from (i) the spaCy NER failing to detect the entity type CODE (e.g. \u201c10424/05\u201d) and (ii) the LLM re-introducing names of named entities that are spelled slightly differently (e.g. \u201cMr Abdisamad Adow Sufi\u201d instead of \u201cMr Abdisamad Adow Sufy\u201d). Regarding utility, all three model configurations achieve similar SSim scores ranging from 0.67 (T = 0.8) to 0.69 (T = 0.2). These results fall into the same range achieved using the IMDb62 movie reviews dataset. However, in addition to re-introducing entities that should have been masked, we found that, on average, the LLM introduces 5.24 new entities (28.49%) per court case. While some of these, depending on the context, can be considered harmless noise (e.g. \u201cEuropean Supreme Tribunal\u201d), manual inspection revealed that many change the meaning and legitimacy of the sanitized texts. For example, 4.7% contain names of people that do not appear in the original text, 43.3% contain new article numbers, 20.5% contain new dates, and 11.8% include names of potentially unrelated countries. The frequency of such hallucinations could also be a consequence of the specific text genre of court cases, and future work should examine to what extent this also occurs in whistleblower testimonies and how it affects the manual post-processing of the generated text that is previewed in our semi-automated tool. 5.3 Re-Identification Through Content: Whistleblower Testimony Excerpts We further investigated our tool\u2019s rewritings of two excerpts (Tables 3, 4) from a whistleblower\u2019s hearing in the Hunter Biden tax evasion case, as released by the United States House Committee on Ways and Means.12 This qualitative view on our results provides a detailed understanding of which identifiers were rewritten and how.13 5.3.1 Approach. First, we compiled the essential \ud835\udc38\ud835\udc51\ud835\udc61 upon which we based our analysis. Next, we assessed the textual features in both excerpts to enhance our tool\u2019s automatic Level of Concern (LvC) estimations, aiming for the lowest author identifiability (\ud835\udc34\ud835\udc56\ud835\udc51). Finally, we input these annotations into the user interface to produce the rewritings.
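For illustration, such per-term annotations could take a shape like the following; this structure is hypothetical and not the tool's documented input format:

```python
# Hypothetical illustration of manually adjusted Level-of-Concern annotations
# (gray = low, yellow = medium, red = high) before triggering the rewriting.
annotations = {
    "excerpt_1": [
        {"term": "joining the case",   "lvc": "red"},     # first-person indexical
        {"term": "DOJ Tax",            "lvc": "yellow"},  # stylometric identifier
        {"term": "thousands of hours", "lvc": "yellow"},  # hints at author's role
    ],
}
```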
5.3.2 \ud835\udc38\ud835\udc51\ud835\udc61 and \ud835\udc34\ud835\udc56\ud835\udc51. Based on the information from the original texts in Tables 3 and 4 alone, we define \ud835\udc38\ud835\udc51\ud835\udc61 as follows, with \ud835\udc38\ud835\udc51\ud835\udc611, \ud835\udc38\ud835\udc51\ud835\udc612 being a subset of excerpt 1 and \ud835\udc38\ud835\udc51\ud835\udc613 a subset of excerpt 2.
\ud835\udc38\ud835\udc51\ud835\udc61 := { \u201cThe Tax Division approved charges but for no apparent reason changed their decision to a declination.\u201d, \u201cThe declination occurred after significant effort was put into the investigation by the whistleblower.\u201d, \u201cIn their effort in doing what is right, the whistleblower suffered on a professional and personal level.\u201d } 12https://waysandmeans.house.gov/?p=39854458 [Accessed 29-April-2024], \u201c#2\u201d 13To answer these questions, it is immaterial whether the text sample describes a concrete act of wrongdoing (as in our fictitious Ex. 1) or not (as here). In \ud835\udc52\ud835\udc65\ud835\udc501 (Table 3), we classified \u201cjoining the case\u201d (first-person indexical) and implications of a nation-wide investigation as highly concerning. Additionally, we marked all \u201ccase\u201d mentions as highly concerning to evaluate consistent suppression. \u201cDOJ Tax\u201d, being a stylometric identifier because it is not an official abbreviation, received a medium LvC, and \u201cthousands of hours\u201d was similarly categorized, potentially indicating the author\u2019s role as lead in the case. In \ud835\udc52\ud835\udc65\ud835\udc502 (Table 4), we classified the lexical identifier \u201c2018\u201d, which could be cross-referenced relatively easily, as well as all descriptive identifiers concerning the author\u2019s sexual orientation and outing as highly concerning. Furthermore, emotional descriptors (\u201csleep, vacations, gray hairs, et cetera\u201d) are given medium LvC, similar to references of case investment (\u201cthousands of hours\u201d and \u201c95 percent\u201d), mirroring the approach from \ud835\udc52\ud835\udc65\ud835\udc501. 5.3.3 Results and Discussion. \ud835\udc38\ud835\udc65\ud835\udc501\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 retains \ud835\udc38\ud835\udc51\ud835\udc612, but not \ud835\udc38\ud835\udc51\ud835\udc611, as \u201cDOJ Tax\u201d is replaced with \u201cproper noun\u201d due to the nonexistence of a corresponding entity in Wikidata. Consequently, it defaults to the token\u2019s POS tag. For \ud835\udc34\ud835\udc56\ud835\udc51, all identified risks were addressed (e.g., \u201cconsiderable time\u201d replaces \u201cthousands of hours\u201d). However, the generalization of \u201ccase\u201d led to inconsistent terms like \u201cmatter\u201d, \u201csituation\u201d, and \u201cissue\u201d due to the \ud835\udc41\ud835\udc45\ud835\udc41\ud835\udc54\ud835\udc46 = 2 setting. This is beneficial for reducing authorship attribution accuracy but may confuse readers not familiar with the original context. \ud835\udc38\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 maintains parts of \ud835\udc38\ud835\udc51\ud835\udc613, though terms like \u201cX amount of time\u201d and \u201cY amount of the investigation\u201d add little value due to their lack of specificity.
Notably, \u201camount o of\u201d represents a rare LLM-induced spelling error, underscoring the need for human editing for real-world use. The emotional state\u2019s broad generalization to \u201cphysical health, leisure, grey body covering\u201d is odd and less suitable than a singular term would be. Despite this, \ud835\udc38\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 effectively minimizes \ud835\udc34\ud835\udc56\ud835\udc51 by addressing all other identified risks. Table 3: LvC-annotated whistleblower testimony \ud835\udc52\ud835\udc65\ud835\udc501 (excerpt 1) with identifiers (top) and \ud835\udc52\ud835\udc65\ud835\udc501\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 (bottom). Original: \u201cPrior to joining the case, DOJ Tax had approved tax charges for the case and the case was in the process of progressing towards indictment [...] After working thousands of hours on that captive case, poring over evidence, interviewing witnesses all over the U.S., the decision was made by DOJ Tax to change the approval to a declination and not charge the case.\u201d Lexical IDs: DOJ Tax; U.S. Indexical IDs: [implicit: me] joining the case (first person) Descriptive IDs: interviewing witnesses all over the U.S. (nationwide investigation); thousands of hours (author involvement) Sanitized: \u201cThe proper noun had approved tax charges for the matter and the situation was moving towards indictment, but after spending considerable time on that captive matter, poring over evidence, the decision was made by proper noun to defer the approval and not charge the issue.\u201d Table 4: LvC-annotated whistleblower testimony \ud835\udc52\ud835\udc65\ud835\udc502 (excerpt 2) with identifiers (top) and \ud835\udc52\ud835\udc65\ud835\udc502\ud835\udc60\ud835\udc4e\ud835\udc5b\ud835\udc56\ud835\udc61\ud835\udc56\ud835\udc67\ud835\udc52\ud835\udc51 (bottom). Original: \u201cI had opened this investigation in 2018, have spent thousands of hours on the case, worked to complete 95 percent of the investigation, have sacrificed sleep, vacations, gray hairs, et cetera. My husband and I, in identifying me as the case agent, were both publicly outed and ridiculed on social media due to our sexual orientation.\u201d Lexical IDs: 2018; thousands of hours; 95 percent Indexical IDs: me as the case agent (role of author); My husband (author\u2019s marital status) Descriptive IDs: I had opened this investigation in 2018 (can be cross-referenced); My husband and I + publicly outed and ridiculed [...] due to our sexual orientation (author\u2019s sexual orientation and public event); sacrificed sleep, [...], gray hairs (emotional state) Sanitized: \u201cI had opened this investigation on a certain date, had spent X amount of time on the case, worked to complete Y amount of the investigation, sacrificing my physical health, leisure, grey body covering, etc.\u201d 6" + } + ], + "Frank Pallas": [ + { + "url": "http://arxiv.org/abs/2110.15650v1", + "title": "RedCASTLE: Practically Applicable $k_s$-Anonymity for IoT Streaming Data at the Edge in Node-RED", + "abstract": "In this paper, we present RedCASTLE, a practically applicable solution for\nEdge-based $k_s$-anonymization of IoT streaming data in Node-RED.
RedCASTLE\nbuilds upon a pre-existing, rudimentary implementation of the CASTLE algorithm\nand significantly extends it with functionalities indispensable for real-world\nIoT scenarios. In addition, RedCASTLE provides an abstraction layer for\nsmoothly integrating $k_s$-anonymization into Node-RED, a visually programmable\nmiddleware for streaming dataflows widely used in Edge-based IoT scenarios.\nLast but not least, RedCASTLE also provides further capabilities for basic\ninformation reduction that complement $k_s$-anonymization in the\nprivacy-friendly implementation of usecases involving IoT streaming data. A\npreliminary performance assessment finds that RedCASTLE comes with reasonable\noverheads and demonstrates its practical viability.", + "authors": "Frank Pallas, Julian Legler, Niklas Amslgruber, Elias Gr\u00fcnewald", + "published": "2021-10-29", + "updated": "2021-10-29", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR" + ], + "main_content": "INTRODUCTION Ensuring privacy is one of the most important challenges when implementing real-world IoT scenarios such as building automation, connected cities, intelligent energy grids, or smart healthcare [19]. In all these and many further cases, the data collected and processed may reveal personal and sometimes highly sensitive information in a multitude of ways. Regulatory provisions such as the GDPR as well as users\u2019 demands to protect their privacy therefore often require minimizing the level of detail at which continuously flowing IoT data such as sensor measurements or observed events are processed. At the same time, respective data must still be detailed enough to facilitate intended functionalities. For balancing these often diverging goals, advanced anonymization techniques ensuring properties such as \ud835\udc58-anonymity [17], \u2113-diversity [8], or \ud835\udc61-closeness [5] have been established. Algorithms and respective implementations for ensuring these are publicly available. However, their practical adoption in real-world IoT scenarios is currently hindered by at least two shortcomings: First, most algorithms and implementations focus on anonymizing persistent datasets stored in and retrieved from, for instance, a database, while IoT scenarios strongly rest upon streaming data that are processed \u201con the fly\u201d. Ensuring above-mentioned privacy properties for data streams, in turn, has seen significantly less coverage in the scientific discourse so far [2, 15]. Second, and similarly important, existing implementations are rather rudimentary and hardly address questions of integration into established tools and solutions employed in practice. This is particularly true for combining anonymization measures with Edge-computing approaches, which have also been proposed for privacy-friendly designs of IoT scenarios [13]. Without such integration, however, respective anonymization techniques will hardly be adopted in practice. To address these challenges, we herein propose RedCASTLE, an easily integratable and flexibly configurable anonymization extension to the widely used IoT middleware Node-RED. RedCASTLE allows implementing a broad variety of information reduction approaches as well as ensuring \ud835\udc58\ud835\udc60-anonymity via the established CASTLE algorithm [2] for IoT data streams with minimum integration effort.
In particular, RedCASTLE comprises: \u2022 a set of configurable functions for basic information reduction, such as attribute suppression, data filtering, and data mapping, \u2022 an extension of the pre-existing CASTLEGUARD library to facilitate the \ud835\udc58-anonymization of actual data streams with numerical and non-numerical data, and \u2022 a practically applicable and easily adoptable extension that coherently integrates said functionality into Node-RED, a highly interoperable middleware for streaming dataflows widely established for IoT- and Edge-usecases. The provided extension is publicly available under an open source license.1 Our work builds upon a pre-existing, rudimentary implementation, CASTLEGUARD [15]. So far, however, CASTLEGUARD is significantly limited in matters of practical applicability and lacks, for instance, connectivity to real streaming data sources (instead of merely simulating them by reading a .csv-file line by line), capabilities to handle non-numerical data, or integration into real-world pipelines. 1RedCASTLE is available under the MIT license at https://github.com/PrivacyEngineering/RedCASTLE. We herein address these limitations and thereby provide \u2013 to the best of our knowledge \u2013 the first solution for Edge-based \ud835\udc58-anonymization of streaming data that is practically applicable in real-world settings. Our considerations and contributions unfold as follows: In section 2, we provide relevant background knowledge and related work. On this basis, we depict our general integration approach (section 3) as well as our newly introduced functionalities for basic information reduction (section 4) and \ud835\udc58\ud835\udc60-anonymization (section 5) in Node-RED. A preliminary performance assessment of our solution is provided in section 6; section 7 concludes. 2 BACKGROUND & RELATED WORK In this section, we begin with the relevant background on IoT and the anonymization of streaming data in the Node-RED middleware. 2.1 IoT, Edge, and the Role of Streaming Data Internet-of-Things (IoT) scenarios arise in a broad variety of applications, from building automation [6], connected cities [7], or energy grids [16] to smart healthcare [10]. All these scenarios rest upon vast amounts of status and measurement data being collected, processed, integrated, and acted upon in a timely manner. After initial trends towards centralized, often cloud-based architectures where data are sent back and forth between the place of collection and effectuation (e.g., a smart meter collecting consumption data and controlling the charging behavior of an electric vehicle) on the one hand and a centralized processing pipeline on the other, recent developments increasingly recognize the need for more decentralized architectures. This is particularly driven by growing amounts of data conflicting with limited bandwidths and processing capacities and by near-real-time requirements of certain IoT usecases conflicting with inevitable network latencies of cloud-centric approaches. In Edge and Fog computing [1] models, parts of the data processing are therefore carried out closer to the points of collection and effectuation. This makes it possible to filter, aggregate, and otherwise preprocess data before forwarding them to upstream services and to implement significant parts of the functionality locally.
Especially for continuous streams of measurement data and events from large numbers of sensors and devices, this significantly decreases the amount of data to be transferred as well as round-trip latencies between an event occurring and the respective response being carried out. Besides such possible benefits in matters of performance, Edge and Fog computing may, last but not least, also serve as enabling technology for more privacy-friendly implementations of IoT-scenarios through patterns such as early filtering, aggregation, or anonymization [13]. 2.2 Anonymization for Streaming Data Data anonymization is one of the most fundamental techniques for implementing privacy-friendly systems. Na\u00efve approaches for doing so, however, pose the risk of re-identifiability of individuals through so-called quasi-identifiers [17] and, thus, the factual disclosure of personal data. To avoid such risks, advanced anonymization schemes and measures like \ud835\udc58-anonymity [17], \u2113-diversity [8] or \ud835\udc61-closeness [5] have been established. Respective approaches are, though, designed with rather static datasets in mind and do not fit the givens and requirements arising in the context of IoT streaming data [9]. Besides the relatively slow anonymization process, which conflicts with near-real-time requirements in IoT usecases [2], this is particularly the case because the underlying assumptions do not hold for IoT streaming data. Instead, appropriately adapted anonymization models are required. Focusing on \ud835\udc58-anonymity, a suitable model is \ud835\udc58\ud835\udc60-anonymity, as implemented in the CASTLE algorithm [2]. Here, arriving streaming data is assigned to different clusters based on automatically generalized values for a manually defined set of numerical quasi-identifiers. For instance, the algorithm may determine four different value-ranges for a quasi-identifier \u201cvendor-id\u201d and six value-ranges for a quasi-identifier \u201cstation-id\u201d in an electric vehicle charging use case. All messages with similar combinations of so-generalized quasi-identifiers are then combined into one cluster.2 Every cluster is then considered to be \ud835\udc58\ud835\udc60-anonymous if it contains at least \ud835\udc58 values. If clusters cannot be made \ud835\udc58\ud835\udc60-anonymous, they will be merged with other clusters. When new data arrives and does not fit into an existing cluster, the closest cluster gets enlarged (but only if it is not already \ud835\udc58\ud835\udc60-anonymous) or a new cluster is created. Compared to other adopted models such as FAANST [18] or K-VARP [9], CASTLE has seen the strongest recognition in the scientific discourse. In addition, a continuously maintained reference implementation is available as part of the CASTLEGUARD library [15]. We therefore chose CASTLE as the basis for our streaming data anonymization component.
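To make this clustering idea concrete, the following is a heavily simplified sketch of CASTLE-style cluster assignment over generalized quasi-identifier ranges. It is illustrative only (neither the CASTLE nor the CASTLEGUARD code), and the record format and range handling are assumptions:

```python
# Sketch: assign arriving tuples to clusters over generalized QI ranges and
# release a cluster once it holds at least k tuples (heavily simplified).
k = 5
clusters = []  # each: {"ranges": {qi: [lo, hi]}, "tuples": [...]}

def enlargement(cluster, record, qis):
    # total range growth needed to absorb the record
    return sum(
        max(0, cluster["ranges"][q][0] - record[q]) +
        max(0, record[q] - cluster["ranges"][q][1])
        for q in qis
    )

def add_record(record, qis):
    best = min(
        (c for c in clusters if len(c["tuples"]) < k),
        key=lambda c: enlargement(c, record, qis),
        default=None,
    )
    if best is None:  # no enlargeable cluster: open a new one
        best = {"ranges": {q: [record[q], record[q]] for q in qis}, "tuples": []}
        clusters.append(best)
    for q in qis:  # stretch ranges to cover the new record
        best["ranges"][q][0] = min(best["ranges"][q][0], record[q])
        best["ranges"][q][1] = max(best["ranges"][q][1], record[q])
    best["tuples"].append(record)
    if len(best["tuples"]) >= k:  # ks-anonymous: release with generalized QIs
        clusters.remove(best)
        return [{**r, **{q: tuple(best["ranges"][q]) for q in qis}} for r in best["tuples"]]
    return []
```

The actual algorithm additionally bounds the number of active clusters and the tolerated information loss, which this sketch omits.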
2.3 Node-RED Node-RED is best described as a visually programmable middleware for streaming dataflows. It supports a broad variety of interfaces for data in- and egress and is widely used in several industries for implementing complex and dynamically adaptable IoT data flows.3 Given its low footprint, Node-RED is particularly suitable and advocated for usecases involving above-mentioned Edge-based preprocessing of IoT streaming data. Within Node-RED, all functionalities are provided via so-called nodes which are dynamically linked through wires into flows. Messages are brought into a flow via special input nodes, which exist for a broad variety of data sources such as MQTT channels, message buses or mesh networks like KNX or ZWave, or even low-level UDP datagrams. Similarly, messages can be published via output nodes, which can again represent an MQTT channel, an HTTP call to be performed, etc. Between input and output, messages are processed and may be transformed in function nodes of different kinds. In all three categories, available node types are manifold and the library is continuously extended.4 The broad spectrum of available node types notwithstanding, anonymization capabilities \u2013 especially following advanced schemes like \ud835\udc58\ud835\udc60-anonymity \u2013 are currently lacking in the Node-RED ecosystem. 2Details on quasi-identifiers, their importance in the context of anonymization, etc. had to be left out due to space constraints. For more in-depth elaborations, see [17]. 3Existing large-scale industrial applications of Node-RED mentioned in developer forums include, for instance, medical settings, energy provision, or integration of industrial PLCs. See https://discourse.nodered.org/t/node-red-at-enterprise-level/11205/8 4For all available node types, see https://flows.nodered.org/search?type=node. This particularly hinders the adoption of Node-RED in privacy-sensitive usecases or requires separate, non-integrated measures like anonymization proxies to be added between a Node-RED instance and any upstream service. Both options would come with significant downsides in matters of implementable usecases, increased efforts, or performance drops. Instead, we thus propose to integrate advanced anonymization capabilities directly into Node-RED. 3 INTEGRATING ANONYMIZATION INTO NODE-RED In line with other endeavors for practically applicable privacy engineering [3, 4, 14], our solution shall not only provide the functionality to \ud835\udc58\ud835\udc60-anonymize data but also fulfill further nonfunctional requirements such as coherently integrating with established toolchains and respective application patterns or raising low integration effort. In this vein, RedCASTLE is provided as a self-contained extension to Node-RED that encapsulates the underlying functionality of CASTLEGUARD and makes it available through a custom function node that natively integrates and can be used in flows like any other function node. Similarly, functionalities for basic information reduction are also made accessible in a separate class of function nodes. Thereby, RedCASTLE decouples the anonymization functionality as far as possible from Node-RED\u2019s core and ensures future-proofness. As CASTLEGUARD is implemented in Python while Node-RED requires custom nodes to be written in JavaScript, messages are exchanged between these subcomponents via a low-overhead, brokerless local ZeroMQ message queue. Based on these building blocks, \ud835\udc58\ud835\udc60-anonymization of IoT streaming data can be implemented within Node-RED in line with its common visual programming paradigm and respectively established patterns and practices as follows (see figure 1). Ingress. The messages to be anonymized enter a flow through any kind of input node available in Node-RED, ensuring maximum flexibility and interoperability.
A quite common usecase might here be an MQTT-input that subscribes to one or multiple channels. Information Reduction. Even though not necessarily required for \ud835\udc58\ud835\udc60-anonymization, performing basic information reductions \u2013 attribute suppression, filtering, mappings \u2013 on the messages beforehand eliminates unnecessary but possibly privacy-sensitive attributes, reduces complexity for the subsequent step, and also helps render initially unfitting data suitable for automated generalization (e.g., when mapping a discrete vehicle model string to a numerical price parameter in a smart charging scenario). In addition, it may help reduce the bandwidth required for forwarding messages from an Edge-node to upstream services afterwards. Such reduction is done in an information reduction node that is added to the flow and wired to the input node. Available reductions as well as configuration parameters are laid out in section 4 below. Figure 1: \ud835\udc58\ud835\udc60-anonymization process in RedCASTLE. \ud835\udc58\ud835\udc60-Anonymization. The actual generalization and clustering of messages according to the \ud835\udc58\ud835\udc60-anonymization model laid out above (see section 2.2) is done with a separate CASTLEGUARD node. This node abstracts away the complex functionality of the underlying component (as well as respective inter-component communication) and is simply wired to the information reduction node and, thus, fed with pre-processed messages. Again, available functionalities and respective configuration parameters are laid out in more detail below in section 5. As soon as a cluster fulfills the \ud835\udc58\ud835\udc60-criterion, respective messages are bulk-released by the CASTLEGUARD node. Re-Publishing. To forward anonymized data to upstream services outside of Node-RED like a cloud-based processing pipeline or subsequent Edge-local components, a respective output node is wired to the CASTLEGUARD node. Again, a quite common usecase might here be an MQTT-output publishing respective messages via an external broker. For doing so, the CASTLEGUARD node only needs to be wired to any of the output nodes available in Node-RED. Of course, additional function nodes can be inserted at any stage of this basic flow: Before generalization takes place, before the actual \ud835\udc58\ud835\udc60-anonymization, or after the CASTLEGUARD-step. Similarly, some use cases might only use an information reduction node and go without \ud835\udc58\ud835\udc60-anonymization or vice versa. This way, our separated nodes integrate well into larger, more complex flows, allowing users to work flexibly with anonymized data in Node-RED. 4 BASIC INFORMATION REDUCTION For basic data reduction, RedCASTLE provides the following message manipulations that can all be configured by attaching the configuration to the specific message5: Attribute Suppression. Not all parameters of incoming messages may be relevant for the dataflow to be carried out. At the same time, removing certain attributes (such as individual identifiers) from messages may provide benefits in matters of privacy and/or required bandwidth. RedCASTLE therefore allows specifying names for those attributes that are to be stripped from every message that passes a data reduction node. For this aim, a suppress properties node allows removing attributes from messages accordingly.
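The logic of these manipulations (including the filters and conditional changes detailed next) can be illustrated in a language-agnostic way. The following Python sketch is illustrative and not RedCASTLE's actual Node-RED node code; the configuration keys are assumptions:

```python
# Sketch of the reduction logic: attribute suppression, allow/disallow and
# range filters, and conditional changes applied to a single message dict.
def reduce_message(msg, cfg):
    for attr in cfg.get("suppress", []):            # attribute suppression
        msg.pop(attr, None)
    for attr, allowed in cfg.get("allow", {}).items():
        if msg.get(attr) not in allowed:            # allow-filter: drop message
            return None
    for attr, (lo, hi) in cfg.get("range", {}).items():
        val = msg.get(attr)
        if val is not None and not lo <= val <= hi: # range-filter: drop message
            return None
    for rule in cfg.get("conditional", []):         # conditional change / mapping
        if msg.get(rule["ifAttr"]) == rule["equals"]:
            msg[rule["setAttr"]] = rule["toValue"]
    return msg

# Example: strip an identifier and map a vehicle model to a price parameter
cfg = {
    "suppress": ["ownerName"],
    "conditional": [{"ifAttr": "vehicleModel", "equals": "e-tron 55",
                     "setAttr": "vehiclePrice", "toValue": 80000}],
}
print(reduce_message({"ownerName": "A. N.", "vehicleModel": "e-tron 55", "kWh": 21.4}, cfg))
```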
5For the specific syntax to be used for this and all subsequently described configurations, see https://github.com/PrivacyEngineering/RedCASTLE. Filters. Besides suppressing single attributes, messages can also be filtered out completely based on different conditions. In particular, RedCASTLE implements attribute-driven allow- and disallow filters. This does, for instance, completely drop all messages with an objectID-value included in a provided set of disallowed IDs. In addition, a range-filter allows specifying numerical value ranges for message attributes. In this case, all messages with the respective values being outside the respective range are dropped. Conditional Changes. Conditional changes basically allow manipulating message data depending on conditions being matched or not. RedCASTLE allows adding or changing the value of an attribute changeAttributeName, either on the basis of a string being matched or based on numerical value ranges. This allows, for instance, to implement above-mentioned mapping functionality: whenever a parameter vehicle-model matches a particular string, a new numerical parameter vehicle-price may be set. Similarly, numerical price ranges may also be explicitly mapped to (numbered or string-named) price-categories. Except range-based ones, all these reduction functions can be used with numerical and non-numerical attributes. Additional functionalities might be added in the future, but with suppression, filtering, and conditional changes, RedCASTLE already provides the most relevant capabilities for information reduction on continuously flowing messages. 5 NODE-RED-ADOPTED \ud835\udc58\ud835\udc60-ANONYMIZATION Providing practically valuable \ud835\udc58\ud835\udc60-anonymization in Node-RED based on the pre-existing CASTLEGUARD implementation required several extensions to be made. In particular, this regards the previous lack of suitable integration interfaces as well as missing support for non-numerical data. 5.1 Integration Interface First and foremost, we added actual streaming data interfaces as in- and outputs to CASTLEGUARD. Before, CASTLEGUARD only accepted .csv-files as inputs and printed \ud835\udc58\ud835\udc60-anonymized outputs to the command-line, severely limiting its practical use for real-world scenarios. We therefore extended CASTLEGUARD by a lightweight ZeroMQ interface allowing for a coherent and low-overhead integration and message exchange with Node-RED. This interface is employed by our above-mentioned abstracting RedCASTLE function node, which basically receives a message within the Node-RED context, ensures that the modified CASTLEGUARD process is running, and forwards the message \u201cas-is\u201d to this process via said message queue. Similarly, whenever messages are bulk-released by CASTLEGUARD, this is also done via a second ZeroMQ interface listened to by the RedCASTLE function node. All respective, \ud835\udc58\ud835\udc60-anonymized messages are then released by the function node and forwarded and processed within Node-RED as usual. Once RedCASTLE is installed, all this works seamlessly and automatically, without requiring any further configuration etc.
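The Python side of such a bridge can be sketched with pyzmq along the following lines. Socket types, ports, and the JSON layout are assumptions for illustration rather than RedCASTLE's actual wiring, and add_record stands in for the clustering logic sketched in Section 2.2 above:

```python
# Sketch: brokerless ZeroMQ bridge between the Node-RED function node and the
# Python-based anonymization process (illustrative, not the actual implementation).
import zmq

ctx = zmq.Context()
incoming = ctx.socket(zmq.PULL)  # receives messages forwarded "as-is" by Node-RED
incoming.bind("tcp://127.0.0.1:5555")
outgoing = ctx.socket(zmq.PUSH)  # bulk-releases ks-anonymized messages back
outgoing.bind("tcp://127.0.0.1:5556")

while True:
    msg = incoming.recv_json()
    # add_record: CASTLE-style clustering as sketched in Section 2.2 above
    for anon in add_record(msg, qis=["vendorId", "stationId"]):
        outgoing.send_json(anon)
```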
5.2 Non-Numerical Data In addition, we also extended CASTLEGUARD itself to provide advanced functionality for handling non-numerical data by automatically converting them to numerical categories. Non-numerical data could so far not be handled at all by the pre-existing implementation. Given that in real-world IoT scenarios, message attributes are non-numerical (e.g., string-based) quite often and that these attributes (such as, for instance, a vehicle model) might be relevant quasi-identifiers, this significantly limits practical applicability. To at least partially close this gap, we extended CASTLEGUARD with basic capabilities for handling non-numerical message attributes and for incorporating them in the \ud835\udc58\ud835\udc60-anonymization process, including automated categorization. For this purpose, non-categorized-attributes can be specified in RedCASTLE\u2019s configuration. Whenever a so far unseen value is detected for one of these attributes, it is assigned to a new numerical category ID and replaced in the message accordingly. A previously seen value, in turn, is replaced with the previously determined category so that, e.g., all occurrences of a vehicle-model \u201ce-tron 55\u201d are replaced with the same category ID. In CASTLEGUARD\u2019s \ud835\udc58\ud835\udc60-anonymization procedure, these categories are then treated specifically. As there is no natural ordering of category IDs or, respectively, no semantic meaning embodied in their ordering, grouping them based on value ranges would not have made sense. Instead, categories are treated as sets in our extended implementation. Clusters may then be created independently of the category ID ordering and without generalizing them so that, for instance, one cluster may comprise the IDs {3, 6} and the other one {1, 2, 4, 9}. Consequently, instead of min-max-ranges, a list of all category IDs inside a cluster is also placed into the output. 5.3 Anonymization Parameters Besides above-mentioned extensions and adaptations, we also made the underlying \ud835\udc58\ud835\udc60-anonymization procedure highly configurable. Parameters are set in a JSON configuration file and can be divided into algorithm- and dataset-related ones. Algorithm-specific parameters. In this group, parameters that control the \ud835\udc58\ud835\udc60-anonymization procedure can be configured. This includes the k for the \ud835\udc58\ud835\udc60-anonymity, the maximum number of tuples delta, the maximum allowed active clusters beta and the configuration parameter mu for controlling the maximum information loss. Dataset-specific parameters. For the algorithm to work correctly, some information has to be specified in this parameters group. The sensitive attribute has to be set as well as the quasi-identifiers and, if existing, the identifier attribute. The attributes to be interpreted as non-numerical values as described in 5.2 are also specified here.
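Such a configuration might look as follows. This is a hypothetical shape for illustration only; the authoritative key names and syntax are documented in the RedCASTLE repository:

```python
# Hypothetical configuration mirroring the parameter groups of Section 5.3.
config = {
    # algorithm-specific parameters
    "k": 5,       # minimum cluster size for ks-anonymity
    "delta": 10,  # maximum number of tuples held back
    "beta": 5,    # maximum number of active clusters
    "mu": 100,    # controls the maximum tolerated information loss
    # dataset-specific parameters
    "sensitive_attribute": "kWh",
    "quasi_identifiers": ["vendorId", "stationId"],
    "identifier_attribute": "objectID",
    "non_categorized_attributes": ["vehicleModel"],  # auto-converted to category IDs
}
```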
6 PRELIMINARY PERFORMANCE ASSESSMENT For validating at least the basic viability of our approach in matters of expectable overheads and to preclude being on a fundamentally flawed path, we conducted a set of preliminary performance assessments. In line with established best-practices for security- and privacy-related performance benchmarks [11, 12], we deployed 3 medium-sized n2-standard-2 Google Cloud instances to separate different components from each other. The first instance is used for the MQTT broker, the second is running the data emulator (the \u201cbenchmarking client\u201d) and the last one is running the actual system under test, the Node-RED server with RedCASTLE. To minimize external impact, all servers are created in the same availability zone and placed within a Virtual Private Cloud Network. Based on this general setting, we benchmarked 1) the additional delay introduced by RedCASTLE\u2019s \ud835\udc58\ud835\udc60-anonymization and 2) the difference in matters of achievable message throughput with and without RedCASTLE being used. Benchmarks were conducted using a realistic dataset of electric vehicle charging events provided by the city of Boulder, Colorado.6 To spice up the dataset, several fake persons with specific vehicle models and unique IDs were used to enrich the original dataset. We chose a realistic \ud835\udc58 = 5 for our initial assessments, all other anonymization parameters (see 5.3) were kept at their default. All CPU and network loads were constantly monitored during benchmark runs and stayed \u2013 with one exception, see below \u2013 within ranges ensuring we actually benchmarked what we intended to. Message delay. Given the clustering approach behind \ud835\udc58\ud835\udc60-anonymity laid out above, messages are not immediately propagated through the message-flow defined in Node-RED but rather collected in a cluster until enough messages are present for successfully \ud835\udc58\ud835\udc60-anonymizing them. This necessarily implies a delay of message delivery which can be expected to be higher for lower message frequencies. We therefore determined the additional delays induced for 15, 30, 60, 80, and the highest possible number of messages per second. In line with our expectations, the median delay per individual message as well as the observed deviations from this value decrease significantly with higher message frequencies, with the mean delay stabilizing around 1-2 seconds in our chosen scenario (see Fig. 2). Figure 2: Latency overhead induced by RedCASTLE\u2019s \ud835\udc58\ud835\udc60-anonymization for different message frequencies (median, 25th and 75th percentile, and 1.5 times the interquartile range, outliers represented by dots, means by crosses). For many real-world usecases employing privacy-sensitive IoT streaming data, these results appear to be reasonable and acceptable. From 80 messages/s onward, however, the CPU load of one core increased significantly and also resulted in slight latency increases. 6"Electric Vehicle Charging Station Energy Consumption", https://open-data.bouldercolorado.gov/datasets/183adc24880b41c4be9fd6a14eb6165f_0/explore Given Python\u2019s single-thread characteristics, this perfectly resembles RedCASTLE\u2019s expectable behavior and vividly illustrates the computational complexity behind \ud835\udc58\ud835\udc60-anonymization. Figure 3: Measured maximum throughput with and without \ud835\udc58\ud835\udc60-anonymization component (moving 1-minute window)
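For context, the benchmarking client's load generation for both measurements can be sketched with the paho-mqtt client. Broker address, topic, payload fields, and rates are assumptions, not the actual benchmark code:

```python
# Sketch: publish messages at a fixed rate and timestamp them so that a
# subscriber behind RedCASTLE can compute the per-message latency overhead.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("10.0.0.1", 1883)  # assumed address of the broker instance

def fire(rate_per_s, seconds):
    interval = 1.0 / rate_per_s
    for i in range(rate_per_s * seconds):
        payload = {"seq": i, "sentAt": time.time(), "stationId": i % 6, "kWh": 7.4}
        client.publish("charging/events", json.dumps(payload))
        time.sleep(interval)

fire(rate_per_s=30, seconds=60)  # one of the benchmarked message frequencies
```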
Maximum throughput. Besides the delay necessarily introduced, our \ud835\udc58\ud835\udc60-anonymization mechanism expectably also has an impact on the achievable message throughput, which we determined by letting our benchmarking client fire as many messages as possible. This resulted in a relatively stable average throughput around 90 messages/s with RedCASTLE\u2019s \ud835\udc58\ud835\udc60-anonymization integrated into the Node-RED flow (see figure 3). With \ud835\udc58\ud835\udc60-anonymization being skipped, we were surprisingly no longer able to saturate the Node-RED instance: Message throughput in this case reached around 230 messages/s with neither the Node-RED instance nor the two other ones reaching a CPU load above 20%. Network interfaces were also far from operating at full capacity. This points towards a so far unidentified bottleneck in Node-RED. On the one hand, this clearly indicates a strong need for further investigations to identify the actual bottleneck limiting message throughput. On the other, more pragmatic one, the observed limitations are what Node-RED users are currently left with in the employed scenario, no matter what the bottleneck actually is. From this perspective, RedCASTLE\u2019s \ud835\udc58\ud835\udc60-anonymization reduces achievable message throughput by roughly 60% \u2013 a significant overhead that will nonetheless be deemed reasonable in many real-world usecases and will also relativize with more complex and computationally intensive message flows being implemented at the Edge around RedCASTLE\u2019s anonymization. Both the induced latencies and the throughput reduction do, finally, appear in a different light when taking into account that the anonymization functionality provided by RedCASTLE is indispensable for the lawful implementation of many real-world usecases involving privacy-sensitive IoT streaming data. When seen as such an enabling technology, the additional benefits that can be generated from respective usecase implementations will in most cases clearly outweigh or justify the observed overheads. Altogether, our initial performance assessments thus suggest non-negligible but still bearable overheads to result from applying RedCASTLE in real-world usecases. More in-depth investigations \u2013 covering different values for \ud835\udc58, more complex Node-RED flows and trying to pinpoint the observed bottleneck \u2013 are nonetheless necessary for getting a more comprehensive picture in the future. By and large, however, RedCASTLE appears to be a practically viable approach for implementing indispensable anonymization functionality in Edge-based streaming data processing. 7" + } + ], + "Bettina Berendt": [ + { + "url": "http://arxiv.org/abs/1810.12847v2", + "title": "AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing", + "abstract": "Recently, many AI researchers and practitioners have embarked on research\nvisions that involve doing AI for \"Good\". This is part of a general drive\ntowards infusing AI research and practice with ethical thinking. One frequent\ntheme in current ethical guidelines is the requirement that AI be good for all,\nor: contribute to the Common Good. But what is the Common Good, and is it\nenough to want to be good? Via four lead questions, I will illustrate\nchallenges and pitfalls when determining, from an AI point of view, what the\nCommon Good is and how it can be enhanced by AI.
The questions are: What is the\nproblem / What is a problem?, Who defines the problem?, What is the role of\nknowledge?, and What are important side effects and dynamics? The illustration\nwill use an example from the domain of \"AI for Social Good\", more specifically\n\"Data Science for Social Good\". Even if the importance of these questions may\nbe known at an abstract level, they do not get asked sufficiently in practice,\nas shown by an exploratory study of 99 contributions to recent conferences in\nthe field. Turning these challenges and pitfalls into a positive\nrecommendation, as a conclusion I will draw on another characteristic of\ncomputer-science thinking and practice to make these impediments visible and\nattenuate them: \"attacks\" as a method for improving design. This results in the\nproposal of ethics pen-testing as a method for helping AI designs to better\ncontribute to the Common Good.", + "authors": "Bettina Berendt", + "published": "2018-10-30", + "updated": "2018-11-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CY" + ], + "main_content": "Introduction Artificial Intelligence (AI) is currently experiencing another \u201csummer\u201d in terms of perceived promises and economic growth. At the same time, there are widespread debates around AI\u2019s perceived risks and negative impacts. In response to the latter, AI researchers and practitioners are paying increasing attention to existing ethics codes, and they are drafting new ones. In addition, many have embarked on research programs that explore how to do AI \u201cfor Good\u201d. These two reactions are linked, at a high level, by the understanding that the goal of ethics codes is to encourage and ensure \u201cethical\u201d professional conduct in the sense of this conduct being \u201cmorally good or correct\u201d and \u201cavoiding activities [...] that do harm to people or the environment\u201d. In addition to the goal to do \u201cgood\u201d, many current ethics codes and discussions go further and require that AI contribute to the Common Good. This term is not uniquely (and in many publications not at all) defined, but can be understood as the aim to be good for all. The purpose of the current article is to investigate more closely the notion of AI for the Common Good by drawing on a wider literature, and to start a deeper discussion in the AI community about this goal and the way towards it. Towards this purpose, I invite researchers and practitioners to ask four reflective questions of their research practices and projects. These questions can be used as provocations: interruptions of the flow of everyday practices designed to \u201cinitiate critical reflection [...] on issues that are often otherwise overlooked, obscured or accepted as naturalised practice\u201d [1, p. 225]; see [2] for the use of provocations to encourage reflection on big data. The article is structured as follows. The Common Good is a notion predating AI. I will start from the definitions given in various AI ethics codes of the Common Good and related notions, and draw on selected discussions in political philosophy for deriving questions about these definitions and their operationalization for AI. These definitions and questions are the subject of Section 2. Section 3 will provide definitions of other key terms used in the article, including AI, data science, and knowledge. 
The very general term \u201cAI\u201d will be used to denote research and projects that involve the processing and analysis of knowledge and data, often with machine learning / data mining methods. This interpretation corresponds to the strong representation of data science projects at least in the \u201cAI for Social Good\u201d literature, see Section 5.2. Specific references to data science and machine learning / data mining will be made when necessary. Contributing to the Common Good is an ambitious and noble aim, and I am convinced that it inspires many researchers and practitioners to act in responsible ways. However, as I will argue in this paper, even with the best of intentions, certain characteristics of AI thinking and practice, coupled with the inherent need to act in politically charged environments, may impede \u2018design for the Common Good\u2019. To explain why, Section 4 will detail four specific characteristics, summarized into four lead questions: the problem-solving and solutionism mindset of the engineer, the difficulties of integrating different stakeholders, the role of knowledge, and side effects and dynamics. Section 5 will validate the importance of the four lead questions via an exploratory survey of 99 contributions to recent conferences on AI and Data Science \u201cfor Social Good\u201d or \u201cfor Good\u201d, the notions that are currently most similar to AI for the Common Good and that are sufficiently established to have formed conferences. Turning these challenges into a positive recommendation, the concluding Section 6 will draw on another characteristic of computer-science thinking and practice to make the impediments visible and attenuate them: \u201cattacks\u201d as a method for improving design. In analogy with penetration attacks, I will propose ethics pen-testing as a method for helping AI designs to better contribute to the Common Good. Further, I will argue why the arguments put forward here are characteristic of and relevant for AI and for the goal of enhancing the Common Good, but not restricted to the field or the goal. 2 What is the Common Good? 2.1 The Common Good as a goal for AI The ambition to be good for all (or at least many) people has become prominent throughout computer science in general and AI in particular. Some examples can be found in ethics codes: \u2022 ACM Code of Ethics and Professional Conduct [3]: \u201c1.1 Contribute to society and human wellbeing. This principle concerning the quality of life of all people affirms an obligation to protect fundamental human rights and to respect the diversity of all cultures.\u201d \u2022 Asilomar Principles [4]: \u201c23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.\u201d \u201c14) Shared Benefit: AI technologies should benefit and empower as many people as possible.\u201d \u201c15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.\u201d \u2022 Similar ideas are implicit in IEEE Ethically Aligned Design [5, p. 
5]: the goal to \u201cdevelop successful autonomous intelligent systems that will benefit society\u201d and the second General Principle, to \u201cPrioritize the maximum benefit to humanity and the natural environment.\u201d The first thing to note in these different principles is how differently collectives are referred to. They range from \u2018not all of the benefits should accrue to giant internet companies\u2019 (\u201crather than one state or organization\u201d) to literally \u201call people\u201d or \u201call humanity\u201d. \u201cAs many people as possible\u201d lies between these extremes, but is underspecified when one does not know what constitutes the possible. Further underspecified terms are the \u201cbroadly shared prosperity\u201d and the \u201cwidely shared ethical ideals\u201d (see Section 3 for possible referents). The wordings also leave room for different distributions of the benefits, and they make no statements about how to negotiate multiple and possibly conflicting ideals, values, and notions of what is good. Many of these questions have been and are being debated in the wider literature on the Common Good, which is the subject of the following section. 2.2 Some questions regarding the Common Good, inspired by the notion from political philosophy The Common Good has been discussed widely and controversially by many authors in political philosophy, and it is impossible to survey this literature in the scope of this article. Instead, I will very briefly present some issues that raise relevant questions for the interpretation of the concepts proposed in AI. The Common Good has been defined as \u201cthat which benefits society as a whole\u201d [6]. But how are these elements (the \u201cthat\u201d, \u201cbenefit\u201d, \u201csociety\u201d) defined? Hussain [7] gives more details about the that: \u201cthe common good is [. . . ] part of an encompassing model for practical reasoning among the members of a political community. [. . . ] The relevant [interests and facilities that serve these interests] together constitute the common good and serve as a shared standpoint for political deliberation. [. . . ] The relevant facilities may be part of the natural environment (e.g., the atmosphere, a freshwater aquifer, etc.) or human artifact (e.g., hospitals, schools, etc.). But the most important facilities [. . . ] are social institutions and practices.\u201d One example of such institutions and practices is a scheme of private property. Fundamental rights / human rights (\u201cbasic rights and freedoms\u201d) are parts of the Common Good [7]. Finally, I will use values interchangeably with \u201cinterests\u201d for the purposes of the present article. The notion of benefit also invites different readings: is it an individual\u2019s or a group\u2019s utility in a welfare consequentialist sense, and/or is it based on values beyond this? (Hussain [7] favors the latter reading, but also reports on alternative conceptualizations of the Common Good.) Finally, the questions of what the boundaries of the relevant society (or: political community and its members) are, of whether to take a welfare consequentialist or other standpoint, and of whether and how to account for collective above individual interests [8] tend to receive less attention than others in \u201cfor Good\u201d initiatives, and will therefore not be considered further here. 
But even the definitional elements of facilities, interests, and practical reasoning raise further questions. The following is a selection that contributed to the choice of lead questions proposed below. A first question is: Who defines the Common Good (or the interests and facilities), and how? Political philosophy distinguishes between substantive and proceduralist conceptions of the Common Good. Substantive conceptions specify what factors, goods, values, etc. are beneficial and shared. Proceduralist conceptions instead focus on what procedures are adequate to collectively negotiate and define what is beneficial. The expression \u201csubstantive value\u201d is intended to denote the unassailable status of the value as something that can stand on its own and requires no justification. Yet that status is logically dependent on the attribution of the speaker, who categorizes the value as such. Any such self-supporting value is easily challenged by denying the attribution. Substantive values and their attribution have come under specific political and philosophical attacks after the atrocities of 20th-century authoritarian regimes, who all professed to act in the interest of some common good, an \u201cattempt to make heaven on earth\u201d that \u201cinvariably produces hell\u201d [9]. Proceduralist notions of the Common Good rely on democratic structures and deliberation; it need not be known a priori which facilities and interests will be agreed upon through these processes, see [10]. Even if the focus of proceduralist notions is on process, this does not mean that there are no substantive elements, e.g. [8]. The need for substantive elements can arise from what Popper called the tolerance paradox (if a society is tolerant without limit, this tolerance can be abused or even destroyed by the intolerant). Countermeasures include constraints on the forms the deliberation can take (e.g., that citizens recognize each other as equal and use only reasons that can be accepted by all others [11]) and legal constructs that enable and require a country\u2019s political bodies to protect the political order against those who want to abolish them, such as constitutional clauses that cannot be abolished even by a majority (\u201cmilitant democracy\u201d, cf. [12]). Another distinction is that between communal and distributive conceptions of the Common Good. A communal conception takes the Common Good interests to be interests that citizens have as citizens, whereas a distributive conception is based on the acknowledgement that citizens belong to various groups with distinct interests, that these interests compete for the facilities and resources and may pose different demands, and that decisions and allocations need to be made according to some distributive principle [7]. 2.2.1 From questions about the Common Good to questions about AI for the Common Good The philosophical considerations about the Common Good that have been summarized very briefly in the previous section served as starting points for the questions proposed in Section 4. Here, I will give an overview of the link between the considerations and questions. The considerations above indicate that purely substantive accounts of the Common Good are problematic, that procedures are important, and that groups with different interests and demands may have different notions of the Common Good. These groups correspond to what computer science calls stakeholders. 
These considerations were one inspiration for the first two lead questions: What is the problem, and who defines it (see Sections 4.1 and 4.2)? A second inspiration was that these same two questions have proved constructive in interdisciplinary collaboration around a specific Common Good interest and facility: \u201cprivacy\u201d [13]. Note that the definitional duality of \u201cGood\u201d and \u201cproblem\u201d introduced in the previous paragraph is frequent in AI: some value or aspect of the Common Good is missing, deficient, or under attack, and this constitutes a problem. The problem then prompts a search for a technological contribution to solving or at least addressing this problem. The focus on a (usually technological) solution is the reason to ask a modified version of the first lead question: What is a problem in the first place (see Section 4.3)? \u201cAI for the Common Good\u201d is understood here (and in the surveyed literature) in an engineering sense. Thus, AI methods, technology, and their deployment cannot be an interest, but a facility (or part of it) that serves an interest. This raises the question: what kind of facility is or should this be? I will argue that today, this is mostly some form of knowledge that is then fed into further decision processes. (Another candidate is autonomous systems, which would require a different analysis.) This inspired the third lead question concerning what the role of knowledge is (see Section 4.4). The fourth lead question asks about important side effects and dynamics (see Section 4.5). This can be related to the Common Good literature in that this literature also investigates what is likely to happen under different structures of people acting, deliberating, deciding, and collaborating. 2.3 AI and Data Science for (Social) Good At this point in the argument, one would normally investigate how concrete projects or initiatives (rather than abstract ethics codes) define the Common Good. However, the call to develop and deploy AI for the Common Good has, to the best of my knowledge, not yet led to research programs or publications under that name. At the time of writing of this article (August 2018), a Google search returned three texts. The phrasing \u201cAI for Common Good\u201d has been used in the titles of two recent white papers, one prepared for attendees of the 2018 World Economic Forum Annual Meeting [14], and one by North Highland Consulting [15]. Both focus on highlighting threats posed by Artificial Intelligence; what the Common Good is and how to use AI towards it is not explicated. In addition, an entry in the Communications of the ACM\u2019s news, titled \u201cAI for the Common Good\u201d [16], reports on the AI for Good Global Summit, whose goal definition is given below. However, related ideas have a longer tradition and have led to several conferences and research programs. To outline the field, I have selected four that I believe are most influential and representative of \u201cAI for (some version of) Good\u201d. The selection was based on the duration of the initiative (at least two editions) and/or the backing by an important professional association (AAAI) or an important international actor (the UN). Since the initiatives present their definitions on their Web pages rather than in scientific publications, some interpretation is needed and will be supplied in the following paragraphs. 
Initiatives around both AI and Data Science will be presented, for several reasons. First, \u201cfor Social Good\u201d originated as an initiative from data science; second, data science is one key area of, or related to, current AI (for details, see the definitions in Section 3), such that, third, many contributions to conferences on AI for (Social) Good are or contain data science. The Data Science for Social Good (DSSG) initiative has organized, since 2013, an annual \u201csummer program for aspiring data scientists to work on data mining, machine learning, big data, and data science projects with social impact. Working closely with governments and nonprofits, fellows take on real-world problems in education, health, energy, transportation, and more.\u201d The first part of this definition is strictly speaking not very specific, since many uses of AI have social impact, including large social networks, search engines, and (inter)national surveillance programs. This shifts the definitional core of \u201cSocial Good\u201d to the domains of projects (such as education, health, energy, and transportation) and to the actors (governments and nonprofits) who are likely to be the ones to define project goals and/or who control and provide the data. A definition via domains and actors is also used by the Data Science for Social Good (SoGood) workshop series that has so far seen three consecutive editions at ECML PKDD, a major conference on machine learning, data mining, and data science: \u201chow Data Science can and does contribute to social good in its widest sense, including areas such as: Public safety and disaster relief, Access to food, water, and utilities, Efficiency and sustainability, Government transparency, Data journalism, Economic development, Education, Social services, Healthcare. We are interested both in non-profit projects and in projects that, while not defined as non-profit, still have Social Good as their main focus, and so have managed to build a sustainable business model.\u201d A focus on DSSG projects\u2019 problem-solving is suggested by [17]: DSSG consists of \u201cattempts to solve complex social problems through the use of increasingly available, increasingly combinable, and increasingly computable digital data\u201d. With a method scope of AI in general (rather than DS in particular), the Association for the Advancement of Artificial Intelligence held a spring symposium on \u201cAI for the Social Good\u201d in 2017. The AAAI Spring Symposia center on emerging topics in AI; hence, this is an indication of the endorsement of the field by a major professional association. \u201cAI for the Social Good\u201d is defined as AI \u201caddressing societal challenges, which have not yet received significant attention by the AI community or by the constellation of AI sub-communities, [the use of] AI methods to tackle unsolved societal challenges in a measurable manner.\u201d Another venue defines the field by declaring \u201calmost any real-world problem, which is important for society\u2019s benefit, and could potentially be solved using AI techniques, [to be] within the ambit of this symposium.\u201d This definition reiterates the idea of \u201cbenefit for society\u201d, see Section 2.2, and the focus on problem-solving. With a method scope of \u201cGood\u201d in general (rather
than \u201cSocial Good\u201d in particular), the ITU, the UN agency responsible for issues that concern information and communication technologies, leads the \u201cAI for Global Good Summit\u201d. The Summit has so far been organized twice (2017 and 2018). The goal is described as \u201cAI innovation [being] central to the achievement of the United Nations\u2019 Sustainable Development Goals (SDGs) by capitalizing on the unprecedented quantities of data now being generated on sentiment behavior, human health, commerce, communications, migration and more\u201d, including goals such as \u201cno poverty\u201d, \u201czero hunger\u201d, and \u201cgood health and well-being\u201d. These and most of the 14 other SDG goals have a substantive focus. More specific societal goals, for example fairness (non-discrimination), are pursued by research communities such as Fairness, Accountability and Transparency in Machine Learning and beyond. Another example is the protection of privacy as the goal of various research communities including (in DS) privacy-preserving data mining and data publishing. In addition, funding programs with similar goals exist. I have been part of two \u201cprojects with a primary societal finality\u201d funded by the Flemish Science Council FWO. In these projects, multidisciplinary consortia (of which AI was only one partner) investigated privacy in online social networks and diversity in media, respectively. The methodological and ethical debates in these projects have been an important source of inspiration for the current article. Since these goals are quite distinct from (or at least much more specific than) the Common Good, these specific notions of the Good will not be investigated further here. In sum, \u201cthe Common Good\u201d is referred to as a goal for AI in current publications, but not defined. Concrete current initiatives refer to the \u201cSocial Good\u201d or simply \u201cthe good\u201d, circumscribing it via problem domains and the identity of the non-academic project partners (nonprofits or governments), or via substantive goals agreed upon at UN level. It appears that, as a common denominator, the intended beneficiaries of AI for Good, for Social Good, etc. will generally not be the ones who directly pay for the development or use of this AI. Thus, unlike for example in commercial application areas, no market price can serve as an indicator of value. \u201cThe Social Good\u201d is then the indicator of such value. Healthcare is an interesting example: while it can be a profit-oriented business model, the focus in the domain of AI for (Social) Good appears to lie on the provision of healthcare to broader sections of society (see \u201cfor all\u201d above). Historical experience suggests that such provision requires some kind of national insurance financing scheme based on solidarity rather than payment-for-service. Thus, is any contribution of AI to better healthcare methods already \u201cAI for Good\u201d, or is more needed? It is likely that such questions will be asked in the further development of the field. 3 Terminology: AI, Data Science, knowledge, and ethics-in-AI In this section, further key terms used in this article will be defined. Artificial Intelligence (AI) is \u201cthe ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience\u201d [18]. In line with the ACM Computer-Science subjects classification, I regard AI as a subfield of computer science, and as such a field at the intersection of science and engineering. Machine Learning (involved in particular in the last three characteristics of the preceding list) is a field of AI, as encoded for example in the ACM Computer-Science subjects classification in its most recent (2012) version. Data Science (DS) is understood here as a subfield of AI, or more specifically as a field that (a) is situated in many universities within AI groups and (b) draws heavily on methods developed or used in machine learning and data mining. (The machine learning aspect corresponds to the focus on learning models of data, and the data mining aspect to the focus on entire knowledge-discovery workflows.) More generally, data science has been defined as \u201cthe science (or study) of data\u201d and \u201ca new interdisciplinary field that synthesizes and builds on statistics, informatics, computing, communication, management, and sociology to study data and its environments (including domains and other contextual aspects, such as organizational and social aspects) in order to transform data to insights and decisions by following a data-to-knowledge-to-wisdom thinking and methodology\u201d [19]. Conway [20], in a frequently cited online source, described data science by means of a Venn diagram: with regard to machine learning, data science is situated in the intersection of machine learning and substantive expertise. Knowledge is understood in two ways. On the one hand, I will refer to a notion of knowledge as used in psychology: \u201ca structured collection of information that can be acquired through learning, perception or reasoning\u201d [21], understood to be held by a human agent (mental representation). It is the knowledge about something in the world, the expertise [17] that generally draws on many sources and fields, such as different academic disciplines. Such human knowledge can be both input to a research or development activity and its eventual result. On the other hand, I will refer to knowledge as the more immediate output of an AI or DS activity. Russell and Norvig [22, p. 16] implicitly define knowledge as a structured collection of \u201cinformation [...] put into a form that a computer can reason with\u201d. The field of knowledge discovery from databases and the related fields of data mining and data science focus on knowledge in the sense of \u201cnovel, valid, potentially useful, and ultimately understandable patterns in data\u201d [23] \u2013 where the intended recipient, who can use and understand these patterns as structured representations, is often but not necessarily human. In all these meanings, knowledge is structured information, useful and/or understandable to a person or machine. In this article, the very general term \u201cAI\u201d will be used to denote research and projects that involve the processing and analysis of knowledge and data, often with machine learning / data mining methods. 
This interpretation corresponds to the strong representation of data science projects at least in the \u201cAI for Social Good\u201d literature, see Section 5.2. Specific references to data science and machine learning / data mining will be made when necessary. A final note concerns the question: Whose ethics? Robotics as such is not in the focus of the present article, but robots are relevant to the focus on AI and ethics. Not all artificial intelligence is incorporated into robotics, and not all robots are artificially intelligent. However, the intersection is large and relevant, and I regard such AI robots as typical representatives of what the Asilomar Principles call \u201chighly autonomous AI systems\u201d. Regarding these, three types of ethics are relevant. The first type is the professional ethics of the researcher or practitioner (often referred to as computer ethics). This is guided by principles such as Asilomar Principle 11) \u201cHuman Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.\u201d Arguably, these ideals are widely shared, codified for example in the Universal Declaration of Human Rights. Thus, one may interpret the \u201cwidely shared ethical ideals\u201d of Principle 23) as consisting of these four, and possibly also others. (According to Principle 23), \u201c[s]uperintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organization.\u201d) The second type is the professional ethics of a researcher or practitioner who develops robots. Following Veruggio [24], I refer to this as roboethics: \u201cRoboethics is not the ethics of robots nor any artificial ethics, but it is the human ethics of the robots\u2019 designers, manufacturers, and users.\u201d Asilomar Principle 10) constrains these design, manufacturing, and use activities by positing \u201cValue Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.\u201d Note that this formulation focuses on the goals and behaviors of a robot and asks that these be aligned with human values (presumably those of Principle 11) listed above), and that it does not make a commitment as to whether these goals and behaviors stem from the humans designing, manufacturing, or using the robot, or from the robot itself. Thus, the question of whether machine ethics [25, 26, 27] as a third type of ethics, the ethics of a robot (or in fact any AI system), exists and, if so, what its properties are, is left open. In line with this, the remainder of this article will focus on computer ethics / roboethics as human ethics in the sense described above. 4 How to create \u201cAI for the Common Good\u201d: Four lead questions In this section, I will analyze four specific characteristics of AI thinking and practice that challenge and may impede design for the Common Good: the problem-solving and solutionism mindset of the engineer, the integration of stakeholders, the role of knowledge, and side effects and dynamics. The questions will be illustrated by references to a running example from the domain of \u201cAI for Social Good\u201d. 
The example itself is, intentionally, not a real example in the sense of being the contents of a specific AI paper, report, or otherwise \u2013 because the point of the present article is not to denigrate the merits of any particular project. Instead, the example is a fictitious synopsis of uses of AI/DS in various contexts, with these uses all focusing on the same issue. The example centers on drugs, considered by some to be \u201cpublic enemy number one\u201d, that is, the ultimate \u201cCommon Bad\u201d, whose absence would surely enhance the Common Good. While traditionally such statements targeted illegal drugs, the recent US opioid crisis, which was declared a Nationwide Public Health Emergency, has highlighted how a similar problem can originate from a substance that may be legally prescribed or illegally peddled. The opioid crisis also illustrates how public health and criminal justice issues continue to interact: within three paragraphs of one political speech, US President Trump lauded an initiative that caused people to turn in more than 900,000 pounds of unused or expired prescription drugs, and the arrest of criminal aliens with 76,000 charges and convictions for dangerous drug crimes. The rhetoric around the opioid crisis has brought one question back into sharp focus: there is a problem, but what exactly is wrong? This leads to the first lead question Q1. 4.1 Q1: \u201cWhat is the problem?\u201d Consider the example: What is \u201cthe drug problem\u201d? Here is a non-exhaustive list of candidates: 1. People use drugs. 2. People sell drugs. 3. Certain people (e.g. the poor, black people, ...) use drugs. 4. Drug users commit crimes. 5. Drug users become homeless, ill, ... 6. Drug users die (earlier than they would have without drugs). 7. There aren\u2019t enough drugs available. These alternative definitions are designed to represent, in a simplified way, the views of different people who are affected by drug usage and its consequences. If, therefore, a system to \u201csolve\u201d the drug problem, by AI or otherwise, is created, the question of who defines the problem counts. 4.2 Q2: Who defines the problem? A necessary condition for designing systems that further the Common Good is to hear the voices of multiple and diverse people who will be affected by the system. The integration of multiple stakeholders in requirements engineering (for an overview, see [28]) is therefore increasingly required by ethics codes as well as laws. Codes such as the ACM Code of Conduct [3], the AOIR Recommendations for Ethical Decision-Making and Internet Research [29], and the IEEE Ethically Aligned Design guidelines [5] explicitly call for this. A special form of multi-stakeholder requirements elicitation has recently even attained the status of a legal obligation: the European Union\u2019s new data protection law, the General Data Protection Regulation (GDPR), requires a data protection impact assessment before personal data are collected and processed. (While the current article focuses on ethics codes, I will make some references to the GDPR as an example of a current and wide-ranging attempt to codify rules for technology, including AI, to protect individuals\u2019 rights and freedoms.)
As part of such processes, \u201c[i]t is understood that there will be clashes of values and norms when identifying, implementing, and evaluating these systems (a state often referred to as \u2018moral overload\u2019)\u201d [5, p. 23], so conflict resolution methods and processes are required. The study of democratic methods for gathering and negotiating requirements is a subfield of requirements engineering [30]. However, an ongoing challenge remains: how to best support democratic deliberation and conceptions of distributive justice with software and/or software engineering methods. Conflict identification and resolution become more difficult when stakeholders are differentially able to cause a clash in the first place, because they are embedded in socio-technical systems differently and differ in their abilities to perceive and voice their values and norms. As an example, consider imprisoned drug users as one of the relevant groups in [31]. Further problems arise when affected communities and individuals are outside the boundaries of the society deemed relevant in the respective notion of the Common Good. For example, inhabitants of countries such as Colombia, in which drugs are grown and who suffer from the local effects of drug cartels\u2019 power, have argued that they are outside the consideration of drug consumers in the West [32]. In addition, it is questionable whether talking to different stakeholders is enough, because it may not affect the structural mold into which these different stakeholders\u2019 utterances will be put: the very notion of what a problem \u2013 any problem \u2013 is. This will be investigated next. 4.3 Q1\u2019: \u201cWhat is a problem?\u201d (and thereby: What is the problem \u2013 revisited) While conceivably no AI researcher or practitioner would be so preposterous as to claim to \u201csolve the drug problem\u201d, AI approaches do focus on a version of the problem (usually implicitly specified to be smaller). This is evidenced by the above-mentioned reference to the use of AI on \u201c[a]lmost any real-world problem, which is important for society\u2019s benefit, and could potentially be solved using AI techniques\u201d. This is germane to the discipline and the \u201cproblem-solving mindset\u201d of the engineer. 4.3.1 Problem-solving In everyday language, a problem is \u201ca matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome\u201d. In some cases, \u201cdealing with and overcoming\u201d may be relatively straightforward. Using another drugs example: if the problem is a person\u2019s breathing being suspended due to an overdose, this problem can be dealt with and overcome by the proper administration of Naloxone. However, most real-world problems, including many around drug overdosing, are more complex. For example, subjecting a heroin addict to a methadone program can \u201cdeal with\u201d the heroin addiction, but it does not necessarily \u201covercome\u201d it. In addition, many real-world problems are open-ended. For example, there is probably no \u201covercoming\u201d the fact that in any society, many people overuse or misuse legal or illegal drugs \u2013 but this does not relieve societies of the responsibility to \u201cdeal with\u201d drug usage. 
On the other end of the scale, there are chess problems: \u201can arrangement of pieces in which the solver has to achieve a specified result\u201d, or mathematics/physics problems: \u201can inquiry starting from given conditions to investigate or demonstrate a fact, result, or law.\u201d Engineering problems may, but do not have to, start from the everyday notion \u2013 but the engineering approach rests on transforming whatever the starting point is into a well-defined designation: moving from a state A (undesirable situation) to a state B (desirable situation) [33]. Only once the \u201cproblem\u201d has this well-defined shape can the engineer begin to \u201csolve\u201d it. This is what makes engineering precise, graspable, and powerful. However, most social problems are not chess problems, and they do not exist in context-free structures. Therefore, en route to the definition of an engineering problem, one must usually make some assumptions that are hard or even impossible to formalize. Ex post, the ambivalence that existed in the beginning tends to be cognitively minimized and the result is taken to be the truth (even if it resulted from decisions that might just as well have been made otherwise), cf. [34]. The result is, partly, that the definition of the problem often arose from the opinions of only one or only a few stakeholders. Multi-stakeholder methods and participatory software design are approaches for addressing this issue. However, regardless of which and how many stakeholders have been consulted, formalization remains a necessary step. The risk of conceiving of social problems in terms of engineering problems is to blind oneself to the vagaries of the formalization step, and to fail to consider alternatives to the chosen formalization. 4.3.2 Problems and \u201csolutions\u201d Likewise, what do we consider as \u201csolutions\u201d? This will depend on the context in which problem-solving takes place. Consider several standard problem-solution pairs, with solution approaches from law, law enforcement, and public health, in different countries. The current discussion will focus on illegal drugs. In general, problems 1-7 above exist for legal drugs too, but are addressed differently. In the War on Drugs that began in the Philippines in 2016, attention has focused on the first and second of the problem versions from Section 4.1 above (henceforth, #1 and #2). The \u201csolution\u201d proposed in the election campaign of President Duterte, as well as enacted in a large number of cases, was to stifle both supply and demand by killing drug dealers and drug users, cf. [35]. Another \u201csolution\u201d approach consists of criminalization and incarceration laws and policies for drug dealing and use, even of small quantities, as in the US during recent decades. Some countries exempt the ownership and consumption of small quantities of illegal drugs from prosecution (i.e. prioritize #2 in the problem definition). Various societal actors and authors (e.g. [36, 37]) have argued that problem definition #3 stands behind US laws that penalize the use and dealing of drugs traditionally associated with poor and black users (crack cocaine) far more heavily than that of similar drugs traditionally more prevalent among affluent white users (powder cocaine). The identity of drug users in problem definition #3 can even become associated with a (re-)framing both of \u201cthe problem\u201d and \u201cthe solution\u201d. 
This point has been made after US President Trump, in October 2017, declared the opioid crisis (which affects many poor white people) a public health emergency rather than another type of drugs on which to wage war [38]. #4 can be a problem in at least two ways: #4.1 when drug users commit crimes such as theft or prostitution to finance their addiction, or #4.2 when intoxicated people become uninhibited and/or aggressive and then commit crimes such as assault and murder. #4.2 is an often-voiced observation concerning the legal drug alcohol, and \u201csolutions\u201d include penalties for driving when intoxicated. (Beyond that, for legal drugs there tends to be a separation between permitting the intoxication as an expression of personal freedoms and the sanctioning of crimes if they occur.) #4.1, on the other hand, is in many cases tightly linked with #5, and programs such as substituting methadone for heroin in (a) legal, (b) insurance-covered, and (c) medically administered ways are offered as partial \u201csolutions\u201d. If #6 is considered the main problem, the search for a solution may need to turn from the ubiquitous attempts to reduce consumption to accepting the fact that consumption and overdosing happen, and to counteracting the lethal effects of overdoses as they appear. These approaches from law, law enforcement, and healthcare are not, or not in their entirety, AI-based. AI is used to address parts of the problems, as when predictive policing software recommends where to patrol for drugs [39]. The AI models underlying such software need well-defined and accessible data and well-defined objective functions whose maximization constitutes a \u201csolution\u201d. This can lead to police deciding to patrol where there have been many drug-related arrests in the past, rather than where there has been much drug usage, and it will commit countermeasures against such biased decisions to one formalization of \u201cfairness\u201d, which may mean that other notions of fairness are violated [40, 41]. Such side effects of AI \u201csolutions\u201d should be kept in mind. To keep the unavoidable restrictions more visible, references to \u201cproblems\u201d and \u201csolutions\u201d should be replaced by terms that are more clearly technical and limited in scope, such as \u201cthe task to be done\u201d. In the example, the task could be to decide where to patrol. Once the task is clear, the question becomes what to do or communicate. 4.4 Q3: What is the role of knowledge? Above, I have argued that the transformation of a social problem into a formal problem poses challenges when the goal is to contribute to the Common Good. In this section, I will study two challenges related to this transformation, both of them related to knowledge. The first concerns the knowledge that enters (or fails to enter) into the transformation into a formal problem. The second concerns the reverse direction: what happens when the output or \u201csolution\u201d of this formal problem is a piece of generated knowledge? And what side effects arise when both input and output are (necessarily) entangled with the knowledge of the AI developer, funder, ...? How may AI methods themselves affect such knowledge effects? What consequences may these constellations have for the Common Good? 
AI and in particular DS are strongly linked to knowledge: the goal of AI is often described in a procedure-oriented way, such as in the definition presented in Section 3: to \u201c[develop] systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience\u201d. Yet, early on in the history of AI, it became clear that only focusing on algorithms (e.g. for reasoning) will not produce intelligence, but that strong knowledge bases are also needed. Terminology and valuations of logics-based approaches change over time, and currently, the reliance on knowledge is often expressed as a reliance on (big) data instead. In other words, AI systems and their outputs are considered useful if they work on large and rich data or knowledge, and if their outputs present new information that humans derive further knowledge from and act upon. 4.4.1 The power of knowledge: AI and framing effects The framing of a problem denotes the way it is described. The framing of a problem (and its associated \u201csolutions\u201d) has a powerful effect on how people \u2013 and thus the public as those who may benefit from, or be harmed by, an AI system \u2013 perceive the world and act in it, e.g., [42]. Frames can themselves be objects of knowledge, that is, statements can be made about them and discussed. For example, different frames can be identified and compared as alternatives. Designers should be aware that their knowledge-based methods operate in an environment filled with politically and otherwise induced frames and that association-based methods tend to reinforce these frames. These frames come loaded with certain versions of the notion of the Common Good (and many more blind spots regarding it). To work in the interest of the Common Good, an AI researcher or practitioner should be aware of this fact and make conscious (and transparent) choices about whether to sustain frames or expose them. Framing interacts with AI for the Common Good on multiple levels. First, frames contribute \u2013 or could contribute \u2013 to problem definition. Second, AI, by the way it processes knowledge, can serve to reinforce such frames. Third, frames operate not only on the level of problems and solutions, but also on the level of methods. These effects will be considered in turn. Frames emphasize certain aspects of reality, and they suppress or even block others. One consequence is that specific views \u2013 including but not limited to the perception of what the problem is by specific stakeholder groups \u2013 can remain suppressed, or that the knowledge that certain solutions do not work is blocked. An example is the trope of the \u201cWar on Drugs\u201d. Launched in 1969 by then-US president Richard Nixon, this may have been influenced by concerns over public health and the suffering induced by abuses of illegal drugs. However, an alternative view has co-existed with this, succinctly described by Nixon\u2019s then counsel and Assistant to the President for Domestic Affairs: \u201cThe Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. [...] We knew we couldn\u2019t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. 
We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.\u201d [43]. Baum\u2019s view changes the roles of pieces of knowledge: in the War on Drugs frame, the drug-taking is the problem, and the solution is (or at least involves) arresting people, raiding their homes, breaking up their meetings, and using the media to vilify them. In the Baum frame, the existing or (expectedly) growing influence of antiwar and black citizens is the problem, and an (at least intermediate) solution is to wage a War on Drugs. Independently of whether one follows Baum\u2019s sinister interpretation of the political will of the Nixon campaign, there is now a more widespread acceptance of criminalization and incarceration not having \u201csolved\u201d anything at all [43]: (a) legal alternatives to criminalization exist, have been or are being tested in different countries, and have often led to measurable improvements in health and crime statistics; (b) legal drugs (in particular alcohol) and prescription drugs (such as opioid painkillers) affect far more people than illegal drugs and cause enormous human suffering and economic costs; and (c) the \u201ccure\u201d of criminalization imparts suffering and creates new problems, also for the Common Good. Frames tend to be reiterated and \u201cechoed\u201d, and they can survive even in the face of clear scientific evidence that contradicts a frame. Arguably, much of current drug policy rests on reiterated, \u201cechoed\u201d frames that have been constructed and perpetuated for a number of decades now. One example is the perpetuation of the focus on criminal justice even in the context of the National Public Health Emergency proclaimed by US President Trump, see above. Another example is the controversy around David Nutt\u2019s two studies in The Lancet arguing that alcohol and tobacco are far more dangerous than common illegal drugs. Among other things, Nutt was dismissed from the UK government Advisory Council on the Misuse of Drugs; see the summary and links in [44]. Mass media have long been known to be influential in relaying, reinforcing, and maintaining frames [45], creating \u201cecho chambers\u201d. Social media have recently been described as intensifying the echo-chamber effects that media often have anyway, e.g. [46, 47, 48]. This is where AI enters: recommender systems (the backbone of modern social media platforms\u2019 approach to addressing information overload) work on associations learned from past data and thereby tend to further strengthen these effects, e.g. [49]. Framing operates not only at the level of problems and solutions, but also on the method level. This can have wide-ranging effects on decisions, for example in data-science-related projects. For example, structuring a decision-making process and tool by first identifying potential harms at a general level and then weighing them against specific and contextual potential benefits \u201cwould always, it seemed, come out in favour of intervening and therefore in favor of the data sharing that would enable intervention\u201d [50, pp. 4-5]. In an analysis of big data approaches to epidemics in high-income vs. 
low-/middle-income countries, the conception of populations as well-informed individuals versus as pathogen-carrying groups may imply that \u201cbig data models built to facilitate individuals\u2019 well-being and autonomy instead would constitute perfect tools for mass control and surveillance\u201d [51, p. 30]. Finally, the correlation-based nature of data mining itself has effects. Data science models used in the criminal justice sector have been criticized widely for their effects of reproducing societal biases against minorities, cf. the proceedings of the FAT(ML) conferences as a specific branch of AI for Social Good, see Section 2.3. As Barabas et al. [52] point out, there are however risks for all, regardless of minority status, when the underlying epistemic assumption is that persons \u201chave\u201d criminal tendencies innately correlated with their features and the intervention focuses on decisions about bail or incarceration, rather than on causal factors and the possibility to change them via effective diagnosis of, and intervention in, criminogenic needs. In sum, frames affect AI projects, and they can be reinforced by AI techniques. Framing may be unavoidable, but it needs to be reflected upon and \u2013 if appropriate \u2013 counteracted. Methods such as those designed to detect frames and bias, and to increase diversity in recommendation [53], can be components of a more critical stance towards these mechanisms. 4.4.2 Limits of imparting knowledge: does it work? Given the strong effect of framing, someone who ignores frames and believes that they can immediately discern \u201cthe facts\u201d in information they consume, or can immediately convey \u201cthe facts\u201d in information they produce and communicate, underrates knowledge. At the same time, many presentations of AI tend to overrate knowledge in the sense that they suggest that the existence of knowledge can by itself solve problems. Such overrating can occur when knowledge is imparted for awareness raising. This is the goal of awareness tools in general (see [54] for an overview specifically with regard to privacy awareness tools) and today is found in many quantified-self apps. Basically, everybody (including drug users) knows that drugs are bad for health, family life, socio-economic status, etc. \u2013 but this does not stop an addict from consuming the drug when the opportunity arises. In general, the limitations of \u201cjust informing\u201d have been studied intensively in recent years by, for example, behavioral economists, and alternatives such as \u201cnudging\u201d have been investigated. These approaches rest on acknowledging that \u201cknowledge is not all\u201d and that decision-making is influenced by a wider range of factors than just classical rationality. One important group of factors are social influences. Many quantified-self apps and related applications, including those around substance abuse or addiction problems, try not only or not at all to impart knowledge, but rather to help build and maintain social-support groups. In their focus on using IT as communication technology, many of these apps are not AI-based. So where does AI come in? A very good example is Bird et al.\u2019s [31] use of data science methods informed by definition #6 of \u201cthe drug problem\u201d. 
The authors used a data-science analysis to show the prevalence of lethal overdoses among addicts recently released from prison (at which time addicts are even more vulnerable than usual due to the enforced abstinence while in prison), and then, instead of arguing for an awareness campaign \u201cto avoid drugs\u201d or \u201cavoid overdosing\u201d, handed out emergency overdose kits and gave basic information on how to administer the antidote. They showed, again with methods from data science, the effectiveness of their intervention: the number of deaths decreased significantly. All these \u201csolutions\u201d rely on certain definitions or framings of problem and solution, and all of them depend on various factors determining decision making. Data scientists and AI designers can draw on methods for supporting these factors and decisions studied in human-computer interaction [55], but as the Bird et al. example suggests, they should also be ready to think outside the box and embrace solutions in which they may only shine as diligent data analysts in the background, rather than as providers of smart knowledge-based tools in the foreground. But even when imparting knowledge works, is it always a good thing? 4.4.3 Limits of imparting knowledge: is it good? AI and in particular DS often appear to operate on the assumption \u201cThe more knowledge, the better\u201d. This idea is applied to individuals as potential holders of knowledge, and it also impinges on the idea of the Common Good: \u201cThe more knowledge society has, the better\u201d. But is this always the case? At the individual level, there are certain well-known problems. First, and as argued in the previous section, imparting knowledge may not work in the envisaged way. It may also manipulate people and negatively impact their autonomy, or hurt them in other ways [13]. It may place an undue burden on them by making them responsible for tasks they lack the mental, financial, temporal, etc. capabilities for (\u201cresponsibilization\u201d, [56]). A growing number of legal and ethics guidelines recognize such limits. These include culturally grounded restrictions on imparting knowledge [57] as well as \u201cthe right to not know\u201d in bioethics. Not knowing certain things is also recognized as a helper against unconscious biases, and it can therefore have economic advantages. The properties of such forms and conventions of not-knowing are investigated in the field of ignorance studies [58]. If imparting knowledge is not necessarily beneficial, a parsimony principle can be useful: focus on the task at hand and get and use the knowledge needed for it, but not more. This principle is inspired by ethical, legal, and general intellectual principles. The Nuremberg Code, an early and highly influential code of research ethics, posits: \u201cThe experiment should aim at positive results for society that cannot be procured in some other way.\u201d The legal principle of proportionality (which pervades laws in general, and is particularly clearly adaptable to current purposes when a knowledge-based activity interferes with the fundamental right to data protection) says that the measure should be necessary to reach the goal. Similarly, in data protection principles and laws such as the GDPR, data minimization (collecting and using as little personal data as possible for the task at hand) is a guiding principle. 
Finally, minimalism is arguably considered a scientific virtue, expressed by general principles such as Occam's razor down to specific topics such as zero-knowledge proofs.

4.5 Q4: What are important side effects and dynamics?

Although computer science arose from cybernetics, the study of systems and feedback loops, much of today's computer science rests on surprisingly linear and short-term cause-effect relationships. This is probably due to that other basic principle of the natural and engineering sciences: divide and conquer, that is, split problems into parts and address these separately. However, side effects and dynamics of applications are becoming more visible. Again, these may negatively impact the overall effect of AI systems and thereby reduce, annihilate, or even reverse positive effects on the Common Good. I will illustrate these with some examples from our example domain.

First, under any version of "the drug problem" relating to people and their behavior, many personal data will inevitably be collected and processed. This raises data protection issues that could well outweigh any positive benefits. As an example, consider social network mining, which is currently a popular method also for studying drug usage, see for example [59]. In the event of a data leak27, such social network mining methods could be used to derive inferences about individuals' drug-related behaviors or propensities that may damage reputations and affect lives. Alternatively, social-media users could receive targeted advertising based on their supposed propensities, and vulnerabilities could be exploited. Importantly, the question is not so much whether the mining methods return true knowledge, or whether the validity of the targeted advertising can be demonstrated: the attempt at manipulation itself may present the problem. This is a lesson learned from the history of the Cambridge Analytica case, in which ideas from an academic project in which social media were mined to predict personality [60] were later supposedly used for psychometrically micro-targeted election advertising [61].

Second, it is by now well known that "objective" big data analyses are likely to reproduce the biases in the data they learn from (thus violating the right to non-discrimination) [62]. This has been argued for a wide range of big-data applications [63] and shown with simulations, for example for drug patrols [39]: a predictive-policing application learns from past data that arrests have occurred frequently in certain areas, and it proposes that police patrol these areas preferentially. This leads to more arrests in these areas, which in turn feeds the learning to propose to patrol them, and so on. In general, since the deployment of big-data analyses will itself create data that then become input to further data analyses, this can easily create vicious-cycle phenomena that sociologists have long observed: dynamics that can perpetuate or even aggravate bias and discrimination [64].
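This runaway dynamic can be made tangible in a few lines of simulation. The sketch below is not the model of [39] or [65]; it is a minimal illustration under strongly simplifying assumptions: two areas with identical true incident rates, a policy that always patrols the area with more recorded arrests, and, for contrast, an ε-greedy variant that occasionally explores.

```python
import random

def simulate(epsilon: float, steps: int = 10_000, seed: int = 0) -> list[int]:
    """Two areas with identical true incident rates. Each step, patrol one
    area; arrests can only be recorded where police are present."""
    rng = random.Random(seed)
    true_rate = [0.1, 0.1]          # identical ground truth in both areas
    arrests = [1, 0]                # a single early arrest seeds the bias
    for _ in range(steps):
        if rng.random() < epsilon:  # exploration: patrol a random area
            area = rng.randrange(2)
        else:                       # exploitation: patrol where the data say crime "is"
            area = 0 if arrests[0] >= arrests[1] else 1
        if rng.random() < true_rate[area]:
            arrests[area] += 1      # recorded data reflect patrolling, not crime
    return arrests

print("greedy (feedback loop):", simulate(epsilon=0.0))  # area 1 stays invisible
print("epsilon-greedy (0.2):", simulate(epsilon=0.2))    # area 1 generates data again
```

Even this toy model reproduces the qualitative point: under the purely greedy policy, the second area generates no data at all, so the bias can never be corrected from within the data; forced exploration keeps the neglected area visible.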
A promising research direction for breaking such feedback loops, drawing on reinforcement learning, proposes a different strategy for patrolling that could help to also detect the cases in the (initially) less likely areas [65]. It will be interesting to see how such strategies can be put into practice, and what effects this will have.

Third, self-reinforcing feedback loops can occur not only at the level of data, but also regarding technology use. Problematic drug usage is a form of addictive behavior. But if someone decides to control their drug use through an app, this can contribute to addictive forms of Internet usage, and the question arises whether this substitution is sensible. Vice versa, learning to use an app ecosystem in a way that is conducive to one's well-being could help overcome substance abuse. It is an open question which factors contribute to these dynamics playing out in vicious or virtuous feedback loops.

4.6 How a Solutionism mindset may hinder the asking of these questions

"Solutionism" is a term coined by Morozov [66]. One of its definitions is "the belief that all difficulties have benign (usually technological) solutions". It can be regarded as an outcome of the problem-solving mindset described above, but other issues described under Q1–Q4 also play a role. In their article on lessons learned from successful DSSG projects, Tanweer and Fiore-Gartland stress the critical importance of expertise on context, project partner organizational culture, and multiple stakeholder perspectives. They conclude that "exposing an inequity or proposing a solution to a social problem doesn't necessarily mean that social good will follow. If we ignore that warning, we are in danger of lapsing into technological solutionism (Morozov, 2013), where we propose data-informed solutions that have little chance of actually making a difference because they are contextually misconstrued, organizationally untenable, or socially unacceptable." [17, p. 3]

Table 1 juxtaposes relevant aspects of engineering and social-science mindsets.

| | Engineering problem-solving approach | Dealing with social problems |
|---|---|---|
| Goal function | well-defined | often involves unresolvable ethical dilemmas, continued re-negotiation |
| Method | can be black box | requires fairness, transparency, accountability |
| Decomposition | modular | usually interdependent |
| Delegation | can be fully delegated | (at least some) participation required |
| Ethics codes and assessments | canonical, starting points25, guidelines26 | often a burdensome afterthought |
| Solvability | Problems can be solved. | Some problems can only be addressed. |

Table 1: Solving engineering problems vs. dealing with social problems

Solutionism, in this table, is the assumption that a social problem (which usually resides on the right-hand side) is a problem on the left-hand side, coupled with the associated treatment of this problem. The point is not to declare the problem-solving approach useless for AI striving for the Common Good – on the contrary, its clarity and explicitness can often prove highly beneficial for method and system development. Also, AI researchers and practitioners increasingly work in interdisciplinary teams, and sometimes also draw on skilled decision analysts who are much more alert to the complexities of decision making. However, the temptation to consider problems "solved" by a technological "solution" remains strong, and it can stand in the way of seeing and addressing the wider social issues.
5 Is the need to ask these questions not obvious? An exploration of current publications

Some readers may concur that Q1–Q4 are important, but ask: Is it not the case that all computer scientists and AI researchers and practitioners know how important problem formulation is? Is it not the case that they are all aware of the central role of stakeholders, and of different stakeholders, in software requirements engineering? Is it not the case that AI researchers and practitioners know about the limitations of knowledge, and that computer scientists, coming from a field that has its roots in cybernetics, are aware of systems and their dynamics? To get a first indication of how AI in the interest of the Common Good deals with these aspects, I turned to four major venues for AI / DS for (Social) Good. The reasons for this choice are the same as for their choice as providing definitions, explained in Section 2.3. In these conferences, a large number of impressive methods and projects were described. The following analysis in no way intends to downplay these approaches' positive contributions, and caveats with regard to the study's results (which may derive from the conferences' goal being "Social Good" rather than "the Common Good") will be described in the discussion in Section 5.3.

5.1 Surveyed materials

I consulted all hyperlinked contributions (articles, extended abstracts, and presentations) that are made available on the websites of the four venues' most current editions or in their published proceedings. This procedure gave rise to 24 extended abstracts or articles from the Data Science for Social Good Conference 201728, 4 articles from SoGood 2017 [67], 15 articles from the 2017 AAAI Spring Symposium on AI for Social Good [68], and 56 presentation slide sets, extended abstracts, or articles from the 2nd AI for Good Global Summit29. The selection contained all hyperlinked contributions to the second, third, and fourth of these venues, since they all presented projects or methods (introductory greetings and other organizational materials were not considered further). The first venue, the Data Science for Social Good Conference, had multiple tracks, of which three appeared pertinent to the present questions and were therefore analyzed: DSSG Fellowship Project Talks, Short Talks: Research Challenges in Doing Data Science for Social Good, and Short Talks: Collaboration Models for Social Good. All contributions were read and assessed with regard to content and whether they contained explicit information relating to the four lead questions above.

5.2 Summary of the findings

5.2.1 Content

The contributions covered a wide range of issues, with no specific issue covered by more than one article. The motivation was usually framed as a problem to be solved; in some cases, a general social-good goal was named or could easily be inferred from the introduction and the specific computational goal. The issues were, in the large majority of cases, of a substantive nature. To the best of my knowledge, no generally agreed-upon ontology of Social Good objectives exists; I have therefore performed a rough classification by the SDG goals that were chosen as the guiding principle of the AI for Good Global Summit, and will report only the most frequent ones.
I will in some cases distinguish between contributions to the first three venues and those to the fourth, for two reasons: the Summit contributions were mostly slide sets that presented little method detail (such that counts of some methodological questions not being covered may be misleading); and the Summit was structured into four thematic tracks (such that counts of topics in this conference follow from this structure, which is not the case in the other conferences).

Sustainable Development Goal (SDG) #3, Good Health and Well-being, was the most frequent: eight out of the 43 contributions to the first three venues covered topics related to this SDG. The topic was also covered in eleven of the contributions to the fourth venue, in which AI + Health: Artificial Intelligence – a game changer for Universal Health Coverage? was one of the four tracks into which the conference was structured. Five contributions in venues 1–3 could be linked to SDG #11, Sustainable Cities and Communities, and two to transportation (which concerns both cities and SDG #9, Industry, Innovation and Infrastructure). In addition, a second track in the AI for Good Global Summit, with 13 contributions, focused on Smart Cities. The AI for Good Global Summit dedicated a third track, with 6 contributions, to AI and Satellite Imagery and linked this method-centric topic specification for contributions to three further SDGs (No Poverty, Life on Land, and Zero Hunger), leading to a strong representation of these topics.

Only a minority of the contributions focused on procedural issues. Of these, five were concerned with the scientific process as such, such as crowdsourcing a health-related task to citizen scientists. These contributions already start from the assumption that the goal of the respective scientific project is a legitimate goal for the social/common good ("the aim is not to promote user interaction but to collect useful data for their scientific goals" [69]). The fourth track of the AI for Good Global Summit, Trust in AI, in its descriptions and contributions likewise considers the goodness of AI as a given and the need to build trust as a way to convince people of this. Four contributions made proposals for the processes of working towards the Social Good, ranging from DSSG projects via non-profits to UN agencies tasked with identifying SDGs in national development plans. Four contributions dealt explicitly with democratic processes (in which citizens deliberate about their visions of the Common Good): one presenting a case study platform to create a democratic city planning system [70], one presenting a case study platform to help make city growth equitable by increasing transparency and accountability [71], one proposing an agent-based architecture to predict the effects of policies [72], and one presenting a mathematical voting model [73]. Another goal that could be linked to processes was to improve information access/diffusion and quality (five contributions): the proposed analysis methods for summarizing news and social media contents and identifying misinformation can arguably help citizens make better-informed democratic choices.

5.2.2 Q1: Alternative goals?

The vast majority of contributions worked with one (sometimes vaguely described or even only implicit) social goal and one computational goal.
Thus, there was in general no attempt at framing the social problem in different ways (see Q1 above), and no discussion of whether and how the selected social goal could translate into different computational goals. The difference between the two types of problems (Q1') was not the topic of any paper.

5.2.3 Q2: Different stakeholders, and a description of how their perspectives and needs were assessed?

The large majority of contributions took their problem definition from the non-academic project partner, which was usually also the data provider (for example, a city council, a health agency, or a transportation agency). In some cases, several partners were mentioned (e.g., a tourism agency and a transportation agency), but no conflicts of goals or problems were reported. Tanweer and Fiore-Gartland, in their report on best practices, mention stakeholders repeatedly, differentiating between "partner organizations" and "affected communities": "DSSG projects can be more effective when done with consideration for the structures and cultures of partner organizations. This knowledge is often tacit for those stakeholders [...] but without it, DSSG teams run the risk of developing products and services that have little chance of being embraced by stakeholders [...] DSSG teams need to view social issues from multiple perspectives, realizing that different communities and interest groups have [different and] sometimes conflicting stakes in the way social problems are portrayed and addressed. Without understanding the complex political landscapes and contested histories within which social problems are enmeshed, they run the risk of alienating affected communities" [17, p. 3].

Only one contribution described an explicit multi-stakeholder process that was used to formulate the computational/engineering problem [74], and another one mentions that its project partners followed such a process, in which there are clearly visible different positions [71]. An agent-based architecture to reflect different positions and interests was proposed in [72], but the question of how to elicit these positions and interests was left implicit in this paper. In two contributions, differences between stakeholders' interests are identified, but only one position is then pursued in the method or tool [75], or the problem is delegated by proposing that the owner of the AI device chooses the position that the machine will follow [76]. Two contributions dealing with questions of fairness take the existence of different viewpoints of what "fair" means as the starting points of their formal models [77, 78]. The contributions dealing with democratic processes, especially [70], implicitly acknowledge different viewpoints, but do not provide any specifics.

5.2.4 Q3: The role of knowledge

It is difficult to describe the breadth and depth of knowledge that was brought to the contributions, since that would require an in-depth understanding of all the domains of all the papers, or at least a validated bibliometric method. Both of these are beyond the possibilities of the current article. However, it can be observed that the setup of the venues strongly encourages participation by AI researchers and practitioners, if only because the venues are defined in a discipline-centric way ("AI for ...", "Data Science for ...").
This limits the incentives for people from other fields to participate. Concerning the role of knowledge as an output of the contributions, the texts give clearer indications. First, data science methods were not only the subject of the "Data Science for ..." venue, but also of many contributions to the "AI for ..." venues (solely or, for example, in combination with computer vision in the AI and Satellite Imagery track of the AI for Good Global Summit). This leads to a strong representation of knowledge-centric methods. Second, eleven contributions are nevertheless coupled with an explicitly identified and specific intervention, such as apps designed to detect health problems [79] or apps to incentivize people to cycle to work [80]. A further contribution mentions that the project partner intends to use the developed tool for a number of specified purposes [81]. Two contributions [82, 17] describe collaboration processes and thereby go beyond knowledge. For several contributions to the AI for Good Global Summit, it was difficult to see from the slide-set presentation what roles knowledge and interventions played. Thus, the numbers given here are likely to be a lower bound on the contributions that went beyond knowledge.

5.2.5 Q4: Dynamics

Auerbach et al. [71] interleave data analysis and policy intervention and build a tool to satisfy various information needs in this process. They include a discussion on how insights from their data analysis and possible actions based on these insights could interact in the future, and what this would imply for applicants and their use of the financing instrument they study. In the studied sample of contributions, this was the only one that included an explicit consideration of possible dynamics.

5.3 Discussion

The results indicate that, so far, the considerations presented in the current paper are not an integral part of current operational research practices in AI/DS for (Social) Good. On the other hand, the importance of Q1 and Q2 is stressed in many methodological papers in the analyzed publications sample. A question akin to Q3 is also highlighted from within the DSSG community: the importance of broader, and experiential, knowledge. Q4 is, at the moment, mostly reflected in the fairness/non-discrimination literature. On the other hand, the mutual dependencies between technology and society are an integral part of the literature on socio-technical systems. More interaction with this field could benefit future AI and data studies [13]. So the community finds a stronger reflection on process relevant, but also finds it hard to translate this into concrete research practices. Some caveats and encouraging recent developments should be taken into account when interpreting these findings.

Q1 and Q2 were weakly represented. One reason for this may be that the requirements on Q1 and Q2 are likely to be less stringent for Social Good than for the Common Good. It appears from the definitions put forward by this community (see Section 2.3) as well as the conference survey (see Section 5.2) that Social Good may well be produced by considering only or mainly one, potentially very specific, stakeholder group (such as poultry farmers in Africa [83] or citizens registering as unemployed in one city [84]).
Developing AI for these groups and/or relevant use cases involving them may still require researchers to consider various perspectives and problem versions, but the scope is much more limited than the "for all" (members of a given community) of the Common Good. According to some definitions of the Social Good, it also appears legitimate to outsource the (social) problem definition to, for example, an NGO or government agency. The weak representation of Q1 and Q2 may also be a consequence of research practices. When researchers depend on the collaboration of a project partner (for example, to have access to data or to stakeholders), they may face difficulties if they conceptualize the problem in a way that contradicts the project partner's notion. This expectation may discourage them from exploring other conceptualizations of the problem.

Regarding the breadth of knowledge brought to the research process (one aspect of Q3), a conference not surveyed here made an interesting decision: the "Fairness, Accountability and Transparency in Machine Learning" (FATML) workshop organizers decided to host, as of 2018, a conference called FAT* and to turn FATML into one sub-event. This decision contributed to a more multidisciplinary perspective on fairness, accountability, and transparency than in the years before, with contributions drawing on a more diverse set of stakeholders and problem formulations. As a member of the Steering Committee of FAT*, I am probably biased to see this conference as a success, but the case shows that this widening of scope is possible and can be highly successful in terms of the number and quality of submissions and attendance rates.

The method of the conference survey has limitations. The coding exercise was, by design, exploratory, and the method simple. In future work, a codebook and more coders will be employed. In addition, the results of the coding exercise suggest that the widespread absence of the considerations Q1–Q4 in the publications may be (partially) an artifact of publication conventions that favor unambiguity and the appearance of a linear and smooth research process. Moreover, many of the surveyed documents were very short and therefore concentrated on telling a simple story; a longer paper may have given room to alternatives considered and other details of the research process. As a result, I expect that qualitative interviews with project participants may yield more information.