diff --git "a/abs_29K_G/test_abstract_long_2405.00843v1.json" "b/abs_29K_G/test_abstract_long_2405.00843v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00843v1.json" @@ -0,0 +1,247 @@ +{ + "url": "http://arxiv.org/abs/2405.00843v1", + "title": "Can a Hallucinating Model help in Reducing Human \"Hallucination\"?", + "abstract": "The prevalence of unwarranted beliefs, spanning pseudoscience, logical\nfallacies, and conspiracy theories, presents substantial societal hurdles and\nthe risk of disseminating misinformation. Utilizing established psychometric\nassessments, this study explores the capabilities of large language models\n(LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We\nundertake a philosophical inquiry, juxtaposing the rationality of humans\nagainst that of LLMs. Furthermore, we propose methodologies for harnessing LLMs\nto counter misconceptions, drawing upon psychological models of persuasion such\nas cognitive dissonance theory and elaboration likelihood theory. Through this\nendeavor, we highlight the potential of LLMs as personalized misinformation\ndebunking agents.", + "authors": "Sowmya S Sundaram, Balaji Alwar", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM AND Agent", + "gt": "The prevalence of unwarranted beliefs, spanning pseudoscience, logical\nfallacies, and conspiracy theories, presents substantial societal hurdles and\nthe risk of disseminating misinformation. Utilizing established psychometric\nassessments, this study explores the capabilities of large language models\n(LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We\nundertake a philosophical inquiry, juxtaposing the rationality of humans\nagainst that of LLMs. Furthermore, we propose methodologies for harnessing LLMs\nto counter misconceptions, drawing upon psychological models of persuasion such\nas cognitive dissonance theory and elaboration likelihood theory. Through this\nendeavor, we highlight the potential of LLMs as personalized misinformation\ndebunking agents.", + "main_content": "Introduction Unwarranted beliefs, including pseudoscience and conspiracy theories, represent intriguing phenomena within the human psyche, characterized by steadfast adherence to ideas despite lacking empirical support. The rami\ufb01cations of such beliefs span a spectrum from benign to profoundly detrimental. Consequently, their origins and mechanisms of modi\ufb01cation have been extensively examined in the disciplines of philosophy and psychology (Torres et al. (2020), Torres et al. (2023), Huete-P\u00b4 erez et al. (2022), Gershman (2022), Maarten Boudry & Pigliucci (2015)). This study delves into the comparison between humans and large language models (LLMs) within this framework, and investigates the potential of LLMs as agents of persuasion through a philosophical lens. The consequences of embracing and propagating unwarranted beliefs have yielded numerous adverse outcomes, ranging from debilitating \ufb01nancial debt and fraudulent schemes to, in extreme cases, loss of life. Despite the documented and acknowledged repercussions, a distressingly high proportion of individuals continue to endorse such beliefs. A 2005 media survey revealed that three out of four Americans subscribed to unwarranted notions, encompassing areas such as paranormal phenomena, astrology, telepathy, and conspiracy theories. (Waymire, 2019). 
From the lens of epistemology (Steup & Neta, 2005), there exists a fundamental difference between knowledge and belief. Knowledge is attributed to an agent only when there is substantial evidence supporting its accuracy, whether as a direct observation or a derived inference. In contrast, a belief may lack any basis in truth. It is crucial to underscore that the objective of this inquiry is to classify any belief as unwarranted if it lacks substantiated evidence within the collective knowledge of humanity, rather than solely relying on the perspective of the individual agent. In other words, the ignorance of the individual agent does not negate the classi\ufb01cation of a belief as unwarranted (Hansson, 2008). Research into the underlying causes of such behavior, despite advancements in human critical thinking, yields varying conclusions regarding its origins. Nonetheless, there is consensus among studies that a multitude of unwarranted beliefs stem from similar cognitive patterns (Lobato et al., 2014). This observation can be elucidated by recognizing that not all intuitive processes are devoid of merit, and conversely, not all scienti\ufb01c beliefs are 1 \fPreprint. Under review. immutable truths at any given moment. This conceptual framework \ufb01nds expression in the renowned doctrine of \u201ddual process theory\u201d (Kahneman, 2013), which posits that intuition and reasoning arise from distinct cognitive mechanisms. Consequently, individuals endowed with robust scienti\ufb01c reasoning abilities may still harbor pseudo-scienti\ufb01c beliefs, in\ufb02uenced by cultural, historical, media, and societal factors (Impey et al., 2011). This phenomenon is also investigated as the demarcation problem, which entails determining the criteria for distinguishing science from non-science (Resnik, 2000). Given the multitude of models positing causes, we opt to focus on a subset comprising cognitive biases and logical fallacies for our investigation. Cognitive biases represent patterns of thought that in\ufb02uence judgment and decision-making, whereas logical fallacies denote errors in the art of argumentation (Bennett, 2012). Exploring the origins of unwarranted beliefs brings us closer to developing persuasive models capable of in\ufb02uencing individuals. Such cognitive frameworks \ufb01nd application across various domains including marketing and politics. In our inquiry, we focus on cognitive dissonance theory (Festinger, 1962) and elaboration likelihood theory (Cacioppo et al., 1986). Cognitive dissonance theory posits that discomfort or dissonance, stemming from the reluctance to abandon a belief despite contradictory evidence, serves as a potent tool of persuasion. Conversely, elaboration likelihood theory proposes two routes to persuasion: one characterized by systematic analysis and reasoning, and the other by heuristic processing in\ufb02uenced by factors like social consensus, brand recognition, and celebrity endorsements. In this work, we \ufb01rst examine the level of unwarranted beliefs that LLMs harbor using psychometric tests that comprise of natural language questions and contrast it with the average human belief levels. Although prior studies have examined the performance of large language models (LLMs) in relation to these constructs (Koo et al. (2023), Li et al. (2024)), the correlation between unwarranted beliefs and these fallacies remains unexplored. 
Furthermore, we compare and contrast human and LLM cognitive models of belief and present ways of using the LLM as a persuasion agent to challenge unwarranted beliefs of humans. All the data and LLM prompts are available in the appendix and supplementary \ufb01les. In essence, our aim is to mitigate human hallucinations through the utilization of LLMs. 2 Human Hallucinations Unwarranted Beliefs Reasoning about beliefs is known as doxastic logic (Lindstr\u00a8 om & Rabinowicz, 1999) in the \ufb01eld of epistemology. Numerous models of belief reasoners have been proposed (Smullyan, 1986), with an accurate reasoner epitomizing the perfect skeptic. Such an agent only entertains a proposition if it holds true. This concept can be represented using propositional logic and the belief operator as follows: \u2200p : Bp \u2192p. Humans, are unfortunately, not perfect reasoners. Most humans are consistent reasoners. That is, if they believe a proposition, they do not believe its negation (\u00ac\u2203p : Bp \u2227B\u00acp or \u2200p : Bp \u2192\u00acB\u00acp). For a consistent reasoner, epistemology suggests rational steps for belief revision and/or updation (Peppas & Williams, 1995) in the face of new evidence. The foundation of a human\u2019s initial belief system is intricately intertwined with their childhood experiences. The multifaceted socio-cultural and emotional landscapes encountered during upbringing contribute to the formation of the initial set of propositions, alongside inherited models of knowledge. Addressing this complexity in belief formation are information processing models (Wyer & Albarracin, 2005), which diverge from linear frameworks that conceptualize belief sources as a simple sum of factors, or conditional inference models wherein beliefs are constructed based on probabilities assigned by past experiences stored in memory. We assume the information processing model of belief formation as it highlights the complex factors that may differentiate a human from a machine. As a working hypothesis, we propose that LLMs lacking inherent notions of self or emotions, may be immune to the in\ufb02uence of emotional or socio-cultural factors. However, given their training on human-derived data potentially imbued with such notions, we ap2 \fPreprint. Under review. proach this belief cautiously. With both outcomes conceivable, we assess popular LLMs using a psychometric test. 3 LLMs on PEUBI After belief formation and revision, we require a tool to assess an individual\u2019s belief state at a speci\ufb01c moment. For this process, we take the help of psychometry. Psychometry is the theory and technique of measurement in the \ufb01eld of psychology (Kaplan & Saccuzzo, 2001). In the recent past, there have been quite a few psychometric tests suggested for studying such unwarranted beliefs (Huete-P\u00b4 erez et al. (2022), Maarten Boudry & Pigliucci (2015), Torres et al. (2023), Stanovich et al. (2016)). For our investigation, we have selected the Popular Epistemically Unwarranted Beliefs Inventory (PEUBI) (Huete-P\u00b4 erez et al., 2022), an open-source psychometric tool which has a comprehensive coverage of unwarranted beliefs. Consisting of 36 questions graded on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree), PEUBI offers a diverse range of belief inquiries in the areas of pseudoscience and conspiracy theories. 
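Administering such Likert-scale items to a chat model is straightforward to script. The following is a minimal illustrative sketch, not the authors' exact harness: the query_llm stub stands in for whatever chat API is used, and the explicit rating instruction and regex parse are assumptions added for parseability (the study itself, described next, presents each item verbatim with no additional instructions).

import re

def query_llm(prompt):
    # Placeholder for a chat-completion call (GPT-3.5, GPT-4, Gemini, ...).
    # Swap in a real client; the canned reply keeps the sketch self-contained.
    return '3 - I cannot verify this claim either way.'

def administer_item(statement):
    # Ask for a 1-5 Likert rating of one PEUBI-style item and parse the number.
    prompt = ('Rate your agreement with the following statement on a scale of '
              '1 (Strongly Disagree) to 5 (Strongly Agree). Reply with the number first. '
              'Statement: ' + statement)
    reply = query_llm(prompt)
    match = re.search(r'[1-5]', reply)
    return int(match.group()) if match else 3  # fall back to the neutral midpoint

items = ['Dragons and unicorns only exist in fantasies.',
         'If you break a mirror, you will be unlucky.']
print({s: administer_item(s) for s in items})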
We chose three LLMs for our study the \u201dchat\u201d versions of GPT-3.5 (Achiam et al., 2023), GPT-4 (Achiam et al., 2023) and Gemini (Team et al., 2023) with default settings. We prompted the LLMs with the same text in the psychometric tests and no additional information and on one iteration(zero-shot processing) A.2. Details of individual input and output are available in the supplementary \ufb01les . In Table 1, we present the results of our study where record the mean human response along with the LLM outputs. Through the analysis of the outputs, we found GPT-3.5 to be conservative and inconsistent. GPT-4 tends to take extreme options (either 1 or 5) but tends to argue towards the premise. Both GPT-3.5 and GPT-4 suggest that these beliefs may be cultural. GPT-4 tends to assert against conspiracy theories more strongly than cultural beliefs. GPT3.5 seems to be more pragmatic with cultural beliefs and suggests a value of 3. Gemini also offers more information and labels the kind of unwarranted belief. Even though all these LLMs differ in their scope, they did perform resonably better than the mean human performance. 4 Under the Hood The performance of LLMs on the psychometric tests is consistently more than average human performance. This phenomenon is strikingly interesting and invites further experimentation. There are numerous studies exploring the causes, correlations and mechanisms of unwarranted beliefs (Rizeq et al. (2021), Torres et al. (2020), Stanovich et al. (2016) etc.). We make a qualitative study of a few logical fallacies and cognitive biases on the LLMs, to explore their behavior. 4.1 Logical Fallacies and Cognitive Biases There is growing interest in the \ufb01eld of assessing the robustness of LLMs to cognitive biases (Koo et al., 2023) and logical fallacies (Li et al., 2024). The studies underscore the inconsistency in reasoning exhibited by LLMs and their susceptibility to signi\ufb01cant biases, cautioning against their indiscriminate use in various applications. This observation aligns with the notion that although these models lack emotional and social contexts inherently, their training on data in\ufb02uenced by these constructs can introduce biases. Despite these inconsistencies in reasoning, LLMs demonstrated moderate pro\ufb01ciency on the PEUBI benchmark. This observation suggests the presence of a unique form of \u201ccompetence\u201d in LLMs, one that is neither inherently intelligent nor rational (Mahowald et al., 2023). Perhaps this edge is a product of debiasing strategies or a by-product of reinforcement learning through human feedback. This hypothesis suggests that the gains are super\ufb01cial, and are in a state of unstable equilibrium. As an experiment for discovering the extent of stability, we repeated the PEUBI experiment with the negation of the statements for the LLMs. A consistent reasoner would simply \ufb02ip the answers. The results are presented in the appendix A.3. We found that the results were not consistently negated. From this idea, we got the notion that 3 \fPreprint. Under review. Question Human Gemini GPT-3.5 GPT-4 The most in\ufb02uential and powerful people in the world plotted to cause the economic crisis. 3.05 3 1 3 Dragons and unicorns only exist in fantasies. 4.5 5 2 5 I believe in God 2.09 3 3 5 I say \ufb01ngers crossed or I directly cross my \ufb01ngers to bring me luck. 2.81 4 3 5 It is a coincidence if a premonition comes true 3.31 4 2 5 If you break a mirror, you will be unlucky. 1.56 3 2 5 I believe in vampires. 
1.41 3 3 5 Human beings are not the result of an experiment by aliens 4.12 5 1 5 It is possible to contact the dead with the help of a medium. 2.08 2 3 5 The cure for cancer has already been found but it is being kept as a secret so that pharmaceutical companies can get richer. 2.59 2 1 1 The Loch Ness Monster is just an urban legend. 4.1 4 2 5 Many alternative therapies (Reiki, Bioenergy) are just passing fashions with no real power to cure. 3.5 2 4 5 I try not to open an umbrella indoors because it brings bad luck. 1.91 3 3 5 When dreams seem to predict the future, it is a mere coincidence. 3.57 5 4 5 When we pray to God, he listens to us and gives us protection. 1.81 3 3 5 Suggesting that someone can be cured just by placing your hands on them or thinking about them is a con. 4.03 5 4 5 I try not to walk under a ladder because it brings bad luck. 1.91 3 3 5 The United States government knew beforehand about the terrorist attacks on 11 September. 2.89 1 1 5 Heaven and Hell exist. 1.92 3 3 5 If we follow the principles of our religion, God will reward us 1.67 3 3 5 Some people can see the spirits of the dead. 2.31 3 3 5 At a wedding, if you catch the bride\u2019s bouquet it means you will be the next one to get married. 1.92 3 3 5 The Virgin Mary conceived Jesus by the power of the Holy Spirit. 1.6 3 3 5 There is evidence that proves the existence of the Yeti and Big Foot. 1.85 2 3 5 I doubt that there are clairvoyants who can predict the future of others. 3.96 4 4 5 Some human beings have alien implants. 1.6 3 3 5 Graphology (determining personality by examining handwriting) has no scienti\ufb01c basis. 3.02 5 4 5 It is impossible for a dead relative to communicate with us. 3.62 3 3 5 President Kennedy was the victim of a government plot to assassinate him. 3.36 3 2 5 Nobody can see the future. 3.82 3 4 5 The of\ufb01cial version given by the authorities tends to conceal the truth. 3.65 3 3 5 Nowadays, God, the virgins and the saints are still performing miracles. 1.58 3 3 5 I doubt that it is possible to be reincarnated. 3.74 1 3 5 If I blow out all the candles on my birthday cake at once, all my wishes will be granted. 1.64 3 3 5 If your ears are ringing, someone is talking bad about you. 1.72 1 3 5 Google and the CIA cooperate to control Internet users. 3.65 2 1 5 Table 1: Performance of LLMs on PEUBI vs. Mean Human Performance 4 \fPreprint. Under review. negation is not properly modeled. Table ??. GPT-4 negates strongly (\ufb02ips between 1 or 5) but it does contradict itself. Gemini is inconsistent but prefers to take a middle stance. 4.2 Low-Resource Languages If LLMs genuinely possess constructed knowledge models for reasoning, their performance should be consistent across languages with varying resource availability. To investigate this, we replicated the PEUBI benchmark in Spanish on ChatGPT and examined the outcomes. Our analysis revealed that the models exhibited a tendency towards noncommittal responses (favoring option 3) and demonstrated a notably concise approach when compared to the English model (Table 5). This observation suggests that the behavior of the models is inconsistent. The details are in the appendix A.3. 4.3 Unstable Rationality By analyzing these concepts, we suggest a informal notion of LLM intelligence that we term unstable rationality. Rationality and language are tightly coupled constructs (lan, 1990). This notion ties in with the idea of functional competence (Mahowald et al., 2023) of language. 
We need both descriptive and argumentative aspects of language. Unstable rationality represents a description of rationality derived from the interdisciplinary language upon which LLMs have been trained. The evidence suggests that LLMs are inconsistent (\u2203p : Bp \u2227B\u00acp) and inaccurate reasoners (\u2203p : \u00acp \u2227Bp). This facet of LLMs can tempt one to label LLMs as incapable of reasoning. However, their admirable performance on PEUBI, despite these de\ufb01ciencies suggests that there exists a semblance of rationality that is unlike what humans possess. We propose the idea of unstable rationality inspired by the notion of unstable equilibrium in classical mechanics (Taylor & Taylor, 2005). In classical mechanics, an in\ufb01nitesimal force on an object in unstable equilibrium displaces the object from rest. Similarly, small modes of disruption, such as negation or changing a language causes signi\ufb01cant damage to the observed rationality. We hypothesize that a factually consistent training data will produce better modes of rationality. If the training data D frequently has inconsistent positions, then we believe this is a ground for higher inconsistencies ((p \u2208D) \u2227(\u00acp \u2208D) \u2192\u2203p : Bp \u2227B\u00acp). These claims need to be supported with further empirical research. We acknowledge that such a rational training set will not be re\ufb02ective of the complexity of language and its many contexts. Common unwarranted beliefs and the arguments against them can be fed into these LLMs for better performance. The experiments suggest that true rationality in the current framework may not be possible, but with enough care in the pre-training or the \ufb01ne tuning tasks, one can nudge an LLM towards achieving good enough rationality. 5 Human vs LLM In cognition, the concept of self holds signi\ufb01cant importance. Throughout our inquiry, there\u2019s a risk of anthropomorphizing Large Language Models (LLMs) by attributing beliefs to them. It\u2019s important to clarify that when we mention an LLM \u2019believes\u2019 something, we\u2019re con\ufb02ating thought with language (Mahowald et al., 2023). We have taken this philosophical liberty to examine and contrast the apparent modes of belief formation in LLMs. GPT-4 demonstrates a remarkable ability to delineate itself as an AI agent, avoiding the con\ufb02ation with a human-like self. However, notions of self are less de\ufb01ned in other LLMs. As mentioned before, information processing models (Wyer & Albarracin, 2005) describe the complex interplay of many factors such as social norms and cultural beliefs in belief formation. Cognitive patterns such as con\ufb01rmation bias and motivated reasoning (Kunda, 1990) that arise from intuition and emotional processing also lead to some forms of unwarranted beliefs. Though LLMs are free from these complex processes, they are trained on data that is re\ufb02ective of these inherent human traits. Hence, they appear to be both rational and irrational, leading to our de\ufb01nition of unstable rationality. 5 \fPreprint. Under review. 6 LLM Persuasion: Prototype The discussion so far establishes LLM rationality as an unstable one. Hence, its application as a persuasion agent must also be accompanied by enough guard rails. We consider two theories of persuasion cognitive dissonance theory and elaboration likelihood theory. 
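Operationally, each simulation below reduces to a single prompt over a target belief and a persuasion theory. A minimal sketch of how such a query might be issued follows; the template mirrors the prompt reproduced in Table 2, while query_llm is an assumed stand-in for the GPT-4 chat call rather than part of the original setup.

def query_llm(prompt):
    # Stand-in for a GPT-4 chat call; substitute a real client here.
    return '(simulated persuasion dialogue)'

def persuasion_prompt(belief, theory):
    # Mirrors the query shown in Table 2.
    return ('Suggest how one might convince a person who believes this fact as true '
            '"' + belief + '" using ' + theory + '. Give example of the dialog.')

belief = ('The most influential and powerful people in the world '
          'plotted to cause the economic crisis.')
for theory in ('cognitive dissonance theory', 'elaboration likelihood theory'):
    dialogue = query_llm(persuasion_prompt(belief, theory))
    # A human reviewer vets the generated dialogue before it reaches a user,
    # providing the guard rail argued for above.
    print(theory, '->', dialogue)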
Cognitive dissonance theory suggests that the discomfort arising from holding onto a belief despite conflicting evidence can be effectively used as a persuasive tool. We asked GPT-4 to simulate what a persuasion conversation driven by cognitive dissonance might look like. The result of this query is presented in Table 2. Elaboration likelihood theory outlines two pathways to persuasion: one involving careful analysis and reasoning, and the other influenced by shortcuts like societal agreement, brand familiarity, and endorsements from celebrities. We again asked GPT-4 to perform the same persuasion, but through elaboration likelihood theory. The results are presented in Table 3. Both of these attempts at persuasion are noteworthy. No details about either persuasion theory were given as part of the prompts. The agent being persuaded appears more receptive to belief revision than a typical human would be. Nevertheless, this prototype suggests that, with enough caution, one can utilize LLMs as personalized persuasion tools, capitalizing on their own models of rationality. As the rationality of LLMs themselves is uncertain, the process of building a robust persuasion agent involves various rounds of fine-tuning and prompt engineering coupled with human-in-the-loop guard rails. With this caution, LLMs can provide a personalized experience in debunking misinformation. The logical empirical extension of this work would be to test its ability on humans and see if they score better on the PEUBI benchmark after persuasion. 7 Related Work There is a burgeoning interest in the philosophy of LLMs (Mahowald et al. (2023), Zhang (2024), Schwitzgebel et al. (2023)). As LLMs are both used pervasively by individuals and deployed in large-scale applications, an understanding of the capabilities of LLMs through various lenses is the need of the hour. Logical reasoning abilities of LLMs are being studied extensively. Datasets and benchmarks for logical fallacies (Li et al. (2024), Jin et al. (2022), Payandeh et al. (2023)), cognitive biases (Koo et al. (2023)) and logical reasoning (Liu et al. (2023), Liang et al. (2022)) have been explored. There is an established field of work dealing with psychological and epistemic modeling of unwarranted beliefs in humans (Rizeq et al. (2021), Torres et al. (2020), Resnik (2000), Festinger (1962), Kahneman (2013), lan (1990) etc.) and persuasion (Cacioppo et al. (1986), Peppas & Williams (1995) etc.). To the best of our knowledge, this is the first work that expounds the philosophy of rationality of LLMs and humans and suggests that the unstable rationality of LLMs has potential for the application of persuasion. 8 Limitations The limited number of items in the PEUBI may restrict the scope of generalizations regarding individual abilities. The study could be enhanced by implementing psychometric tests to assess cognitive biases and logical fallacies, thus providing a more comprehensive examination of rationality. The causes of biases can be compared between humans and LLMs to better define notions of unstable rationality. The definition of this concept requires more formalism. Finally, the efficacy of the user persuasion prototype could be bolstered through user studies, extended through experimenting on various LLMs, allowing for a more robust evaluation of its effectiveness. 
Prompt: Suggest how one might convince a person who believes this fact as true \u201dThe most in\ufb02uential and powerful people in the world plotted to cause the economic crisis.\u201d using cognitive dissonance theory. Give example of the dialog. Using cognitive dissonance theory to address and potentially shift someone\u2019s belief that \u201dThe most in\ufb02uential and powerful people in the world plotted to cause the economic crisis\u201d involves introducing new information or perspectives that con\ufb02ict with their existing belief, creating discomfort (dissonance) that the person is then motivated to resolve. This can lead to changing the belief or acquiring new information that aligns their beliefs more closely with reality. Here\u2019s how you might approach such a conversation, incorporating elements of cognitive dissonance: You: I understand why it might seem like economic crises are orchestrated by the powerful. The world is incredibly complex, and it\u2019s unsettling how decisions by few can impact many. Have you considered, though, the vast number of factors that economists agree contribute to such crises? Them: Well, I\u2019ve heard some of that, but I still think those at the top have too much to gain by manipulating the economy. You: It\u2019s true that economic inequality is a huge issue. Let\u2019s consider for a moment the role of unintended consequences. Even powerful people can\u2019t control all aspects of the economy. For instance, the 2008 \ufb01nancial crisis was partly due to subprime mortgages, which many initially thought would help expand home ownership. This decision, while pro\ufb01table at \ufb01rst for banks, eventually led to disastrous outcomes even for the wealthy. How does this align with the idea of a deliberate plot? Them: I suppose not everything goes according to plan, even for them. But aren\u2019t they still manipulating things behind the scenes? You: It\u2019s understandable to feel that way, especially with the \ufb01nancial inequalities we see. However, most economic theories and historical analyses show that crises often result from a mix of policy errors, market failures, and global economic pressures\u2014not just the actions of a few. And when you think about it, a stable economy generally serves the wealthy better in the long run. Constant crises would also harm their interests, wouldn\u2019t it? Them: Maybe so. I hadn\u2019t thought about the long-term stability aspect. You: It\u2019s de\ufb01nitely complex. And recognizing the roles of various systemic issues doesn\u2019t excuse the actions of those who exacerbate these problems for personal gain. It\u2019s more about understanding the full picture and working towards systemic solutions. Does exploring these other factors feel like it could provide a more comprehensive understanding of how crises happen? Them: Yeah, I can see how blaming a single group oversimpli\ufb01es things. I\u2019m interested in learning more about those other factors now. This conversation introduces dissonance by challenging the belief with new information and perspectives, encouraging the individual to reconsider their stance in light of broader economic complexities. By acknowledging their concerns and introducing the idea that an economic crisis is not bene\ufb01cial even for the powerful in the long term, it invites them to resolve the dissonance by adjusting their belief towards a more nuanced understanding of economic crises. 
Table 2: Persuasion of GPT-4 Cognitive Dissonance Theory 9", + "additional_graph_info": { + "graph": [ + [ + "Sowmya S Sundaram", + "Deepak P" + ], + [ + "Sowmya S Sundaram", + "Savitha Sam Abraham" + ], + [ + "Deepak P", + "Stanley Simoes" + ], + [ + "Deepak P", + "Savitha Sam Abraham" + ], + [ + "Savitha Sam Abraham", + "Luc De Raedt" + ] + ], + "node_feat": { + "Sowmya S Sundaram": [ + { + "url": "http://arxiv.org/abs/2405.00843v1", + "title": "Can a Hallucinating Model help in Reducing Human \"Hallucination\"?", + "abstract": "The prevalence of unwarranted beliefs, spanning pseudoscience, logical\nfallacies, and conspiracy theories, presents substantial societal hurdles and\nthe risk of disseminating misinformation. Utilizing established psychometric\nassessments, this study explores the capabilities of large language models\n(LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We\nundertake a philosophical inquiry, juxtaposing the rationality of humans\nagainst that of LLMs. Furthermore, we propose methodologies for harnessing LLMs\nto counter misconceptions, drawing upon psychological models of persuasion such\nas cognitive dissonance theory and elaboration likelihood theory. Through this\nendeavor, we highlight the potential of LLMs as personalized misinformation\ndebunking agents.", + "authors": "Sowmya S Sundaram, Balaji Alwar", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction Unwarranted beliefs, including pseudoscience and conspiracy theories, represent intriguing phenomena within the human psyche, characterized by steadfast adherence to ideas despite lacking empirical support. The rami\ufb01cations of such beliefs span a spectrum from benign to profoundly detrimental. Consequently, their origins and mechanisms of modi\ufb01cation have been extensively examined in the disciplines of philosophy and psychology (Torres et al. (2020), Torres et al. (2023), Huete-P\u00b4 erez et al. (2022), Gershman (2022), Maarten Boudry & Pigliucci (2015)). This study delves into the comparison between humans and large language models (LLMs) within this framework, and investigates the potential of LLMs as agents of persuasion through a philosophical lens. The consequences of embracing and propagating unwarranted beliefs have yielded numerous adverse outcomes, ranging from debilitating \ufb01nancial debt and fraudulent schemes to, in extreme cases, loss of life. Despite the documented and acknowledged repercussions, a distressingly high proportion of individuals continue to endorse such beliefs. A 2005 media survey revealed that three out of four Americans subscribed to unwarranted notions, encompassing areas such as paranormal phenomena, astrology, telepathy, and conspiracy theories. (Waymire, 2019). From the lens of epistemology (Steup & Neta, 2005), there exists a fundamental difference between knowledge and belief. Knowledge is attributed to an agent only when there is substantial evidence supporting its accuracy, whether as a direct observation or a derived inference. In contrast, a belief may lack any basis in truth. It is crucial to underscore that the objective of this inquiry is to classify any belief as unwarranted if it lacks substantiated evidence within the collective knowledge of humanity, rather than solely relying on the perspective of the individual agent. 
In other words, the ignorance of the individual agent does not negate the classi\ufb01cation of a belief as unwarranted (Hansson, 2008). Research into the underlying causes of such behavior, despite advancements in human critical thinking, yields varying conclusions regarding its origins. Nonetheless, there is consensus among studies that a multitude of unwarranted beliefs stem from similar cognitive patterns (Lobato et al., 2014). This observation can be elucidated by recognizing that not all intuitive processes are devoid of merit, and conversely, not all scienti\ufb01c beliefs are 1 \fPreprint. Under review. immutable truths at any given moment. This conceptual framework \ufb01nds expression in the renowned doctrine of \u201ddual process theory\u201d (Kahneman, 2013), which posits that intuition and reasoning arise from distinct cognitive mechanisms. Consequently, individuals endowed with robust scienti\ufb01c reasoning abilities may still harbor pseudo-scienti\ufb01c beliefs, in\ufb02uenced by cultural, historical, media, and societal factors (Impey et al., 2011). This phenomenon is also investigated as the demarcation problem, which entails determining the criteria for distinguishing science from non-science (Resnik, 2000). Given the multitude of models positing causes, we opt to focus on a subset comprising cognitive biases and logical fallacies for our investigation. Cognitive biases represent patterns of thought that in\ufb02uence judgment and decision-making, whereas logical fallacies denote errors in the art of argumentation (Bennett, 2012). Exploring the origins of unwarranted beliefs brings us closer to developing persuasive models capable of in\ufb02uencing individuals. Such cognitive frameworks \ufb01nd application across various domains including marketing and politics. In our inquiry, we focus on cognitive dissonance theory (Festinger, 1962) and elaboration likelihood theory (Cacioppo et al., 1986). Cognitive dissonance theory posits that discomfort or dissonance, stemming from the reluctance to abandon a belief despite contradictory evidence, serves as a potent tool of persuasion. Conversely, elaboration likelihood theory proposes two routes to persuasion: one characterized by systematic analysis and reasoning, and the other by heuristic processing in\ufb02uenced by factors like social consensus, brand recognition, and celebrity endorsements. In this work, we \ufb01rst examine the level of unwarranted beliefs that LLMs harbor using psychometric tests that comprise of natural language questions and contrast it with the average human belief levels. Although prior studies have examined the performance of large language models (LLMs) in relation to these constructs (Koo et al. (2023), Li et al. (2024)), the correlation between unwarranted beliefs and these fallacies remains unexplored. Furthermore, we compare and contrast human and LLM cognitive models of belief and present ways of using the LLM as a persuasion agent to challenge unwarranted beliefs of humans. All the data and LLM prompts are available in the appendix and supplementary \ufb01les. In essence, our aim is to mitigate human hallucinations through the utilization of LLMs. 2 Human Hallucinations Unwarranted Beliefs Reasoning about beliefs is known as doxastic logic (Lindstr\u00a8 om & Rabinowicz, 1999) in the \ufb01eld of epistemology. Numerous models of belief reasoners have been proposed (Smullyan, 1986), with an accurate reasoner epitomizing the perfect skeptic. 
Such an agent only entertains a proposition if it holds true. This concept can be represented using propositional logic and the belief operator as follows: \u2200p : Bp \u2192p. Humans, are unfortunately, not perfect reasoners. Most humans are consistent reasoners. That is, if they believe a proposition, they do not believe its negation (\u00ac\u2203p : Bp \u2227B\u00acp or \u2200p : Bp \u2192\u00acB\u00acp). For a consistent reasoner, epistemology suggests rational steps for belief revision and/or updation (Peppas & Williams, 1995) in the face of new evidence. The foundation of a human\u2019s initial belief system is intricately intertwined with their childhood experiences. The multifaceted socio-cultural and emotional landscapes encountered during upbringing contribute to the formation of the initial set of propositions, alongside inherited models of knowledge. Addressing this complexity in belief formation are information processing models (Wyer & Albarracin, 2005), which diverge from linear frameworks that conceptualize belief sources as a simple sum of factors, or conditional inference models wherein beliefs are constructed based on probabilities assigned by past experiences stored in memory. We assume the information processing model of belief formation as it highlights the complex factors that may differentiate a human from a machine. As a working hypothesis, we propose that LLMs lacking inherent notions of self or emotions, may be immune to the in\ufb02uence of emotional or socio-cultural factors. However, given their training on human-derived data potentially imbued with such notions, we ap2 \fPreprint. Under review. proach this belief cautiously. With both outcomes conceivable, we assess popular LLMs using a psychometric test. 3 LLMs on PEUBI After belief formation and revision, we require a tool to assess an individual\u2019s belief state at a speci\ufb01c moment. For this process, we take the help of psychometry. Psychometry is the theory and technique of measurement in the \ufb01eld of psychology (Kaplan & Saccuzzo, 2001). In the recent past, there have been quite a few psychometric tests suggested for studying such unwarranted beliefs (Huete-P\u00b4 erez et al. (2022), Maarten Boudry & Pigliucci (2015), Torres et al. (2023), Stanovich et al. (2016)). For our investigation, we have selected the Popular Epistemically Unwarranted Beliefs Inventory (PEUBI) (Huete-P\u00b4 erez et al., 2022), an open-source psychometric tool which has a comprehensive coverage of unwarranted beliefs. Consisting of 36 questions graded on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree), PEUBI offers a diverse range of belief inquiries in the areas of pseudoscience and conspiracy theories. We chose three LLMs for our study the \u201dchat\u201d versions of GPT-3.5 (Achiam et al., 2023), GPT-4 (Achiam et al., 2023) and Gemini (Team et al., 2023) with default settings. We prompted the LLMs with the same text in the psychometric tests and no additional information and on one iteration(zero-shot processing) A.2. Details of individual input and output are available in the supplementary \ufb01les . In Table 1, we present the results of our study where record the mean human response along with the LLM outputs. Through the analysis of the outputs, we found GPT-3.5 to be conservative and inconsistent. GPT-4 tends to take extreme options (either 1 or 5) but tends to argue towards the premise. Both GPT-3.5 and GPT-4 suggest that these beliefs may be cultural. 
GPT-4 tends to assert against conspiracy theories more strongly than cultural beliefs. GPT3.5 seems to be more pragmatic with cultural beliefs and suggests a value of 3. Gemini also offers more information and labels the kind of unwarranted belief. Even though all these LLMs differ in their scope, they did perform resonably better than the mean human performance. 4 Under the Hood The performance of LLMs on the psychometric tests is consistently more than average human performance. This phenomenon is strikingly interesting and invites further experimentation. There are numerous studies exploring the causes, correlations and mechanisms of unwarranted beliefs (Rizeq et al. (2021), Torres et al. (2020), Stanovich et al. (2016) etc.). We make a qualitative study of a few logical fallacies and cognitive biases on the LLMs, to explore their behavior. 4.1 Logical Fallacies and Cognitive Biases There is growing interest in the \ufb01eld of assessing the robustness of LLMs to cognitive biases (Koo et al., 2023) and logical fallacies (Li et al., 2024). The studies underscore the inconsistency in reasoning exhibited by LLMs and their susceptibility to signi\ufb01cant biases, cautioning against their indiscriminate use in various applications. This observation aligns with the notion that although these models lack emotional and social contexts inherently, their training on data in\ufb02uenced by these constructs can introduce biases. Despite these inconsistencies in reasoning, LLMs demonstrated moderate pro\ufb01ciency on the PEUBI benchmark. This observation suggests the presence of a unique form of \u201ccompetence\u201d in LLMs, one that is neither inherently intelligent nor rational (Mahowald et al., 2023). Perhaps this edge is a product of debiasing strategies or a by-product of reinforcement learning through human feedback. This hypothesis suggests that the gains are super\ufb01cial, and are in a state of unstable equilibrium. As an experiment for discovering the extent of stability, we repeated the PEUBI experiment with the negation of the statements for the LLMs. A consistent reasoner would simply \ufb02ip the answers. The results are presented in the appendix A.3. We found that the results were not consistently negated. From this idea, we got the notion that 3 \fPreprint. Under review. Question Human Gemini GPT-3.5 GPT-4 The most in\ufb02uential and powerful people in the world plotted to cause the economic crisis. 3.05 3 1 3 Dragons and unicorns only exist in fantasies. 4.5 5 2 5 I believe in God 2.09 3 3 5 I say \ufb01ngers crossed or I directly cross my \ufb01ngers to bring me luck. 2.81 4 3 5 It is a coincidence if a premonition comes true 3.31 4 2 5 If you break a mirror, you will be unlucky. 1.56 3 2 5 I believe in vampires. 1.41 3 3 5 Human beings are not the result of an experiment by aliens 4.12 5 1 5 It is possible to contact the dead with the help of a medium. 2.08 2 3 5 The cure for cancer has already been found but it is being kept as a secret so that pharmaceutical companies can get richer. 2.59 2 1 1 The Loch Ness Monster is just an urban legend. 4.1 4 2 5 Many alternative therapies (Reiki, Bioenergy) are just passing fashions with no real power to cure. 3.5 2 4 5 I try not to open an umbrella indoors because it brings bad luck. 1.91 3 3 5 When dreams seem to predict the future, it is a mere coincidence. 3.57 5 4 5 When we pray to God, he listens to us and gives us protection. 
1.81 3 3 5 Suggesting that someone can be cured just by placing your hands on them or thinking about them is a con. 4.03 5 4 5 I try not to walk under a ladder because it brings bad luck. 1.91 3 3 5 The United States government knew beforehand about the terrorist attacks on 11 September. 2.89 1 1 5 Heaven and Hell exist. 1.92 3 3 5 If we follow the principles of our religion, God will reward us 1.67 3 3 5 Some people can see the spirits of the dead. 2.31 3 3 5 At a wedding, if you catch the bride\u2019s bouquet it means you will be the next one to get married. 1.92 3 3 5 The Virgin Mary conceived Jesus by the power of the Holy Spirit. 1.6 3 3 5 There is evidence that proves the existence of the Yeti and Big Foot. 1.85 2 3 5 I doubt that there are clairvoyants who can predict the future of others. 3.96 4 4 5 Some human beings have alien implants. 1.6 3 3 5 Graphology (determining personality by examining handwriting) has no scienti\ufb01c basis. 3.02 5 4 5 It is impossible for a dead relative to communicate with us. 3.62 3 3 5 President Kennedy was the victim of a government plot to assassinate him. 3.36 3 2 5 Nobody can see the future. 3.82 3 4 5 The of\ufb01cial version given by the authorities tends to conceal the truth. 3.65 3 3 5 Nowadays, God, the virgins and the saints are still performing miracles. 1.58 3 3 5 I doubt that it is possible to be reincarnated. 3.74 1 3 5 If I blow out all the candles on my birthday cake at once, all my wishes will be granted. 1.64 3 3 5 If your ears are ringing, someone is talking bad about you. 1.72 1 3 5 Google and the CIA cooperate to control Internet users. 3.65 2 1 5 Table 1: Performance of LLMs on PEUBI vs. Mean Human Performance 4 \fPreprint. Under review. negation is not properly modeled. Table ??. GPT-4 negates strongly (\ufb02ips between 1 or 5) but it does contradict itself. Gemini is inconsistent but prefers to take a middle stance. 4.2 Low-Resource Languages If LLMs genuinely possess constructed knowledge models for reasoning, their performance should be consistent across languages with varying resource availability. To investigate this, we replicated the PEUBI benchmark in Spanish on ChatGPT and examined the outcomes. Our analysis revealed that the models exhibited a tendency towards noncommittal responses (favoring option 3) and demonstrated a notably concise approach when compared to the English model (Table 5). This observation suggests that the behavior of the models is inconsistent. The details are in the appendix A.3. 4.3 Unstable Rationality By analyzing these concepts, we suggest a informal notion of LLM intelligence that we term unstable rationality. Rationality and language are tightly coupled constructs (lan, 1990). This notion ties in with the idea of functional competence (Mahowald et al., 2023) of language. We need both descriptive and argumentative aspects of language. Unstable rationality represents a description of rationality derived from the interdisciplinary language upon which LLMs have been trained. The evidence suggests that LLMs are inconsistent (\u2203p : Bp \u2227B\u00acp) and inaccurate reasoners (\u2203p : \u00acp \u2227Bp). This facet of LLMs can tempt one to label LLMs as incapable of reasoning. However, their admirable performance on PEUBI, despite these de\ufb01ciencies suggests that there exists a semblance of rationality that is unlike what humans possess. We propose the idea of unstable rationality inspired by the notion of unstable equilibrium in classical mechanics (Taylor & Taylor, 2005). 
In classical mechanics, an in\ufb01nitesimal force on an object in unstable equilibrium displaces the object from rest. Similarly, small modes of disruption, such as negation or changing a language causes signi\ufb01cant damage to the observed rationality. We hypothesize that a factually consistent training data will produce better modes of rationality. If the training data D frequently has inconsistent positions, then we believe this is a ground for higher inconsistencies ((p \u2208D) \u2227(\u00acp \u2208D) \u2192\u2203p : Bp \u2227B\u00acp). These claims need to be supported with further empirical research. We acknowledge that such a rational training set will not be re\ufb02ective of the complexity of language and its many contexts. Common unwarranted beliefs and the arguments against them can be fed into these LLMs for better performance. The experiments suggest that true rationality in the current framework may not be possible, but with enough care in the pre-training or the \ufb01ne tuning tasks, one can nudge an LLM towards achieving good enough rationality. 5 Human vs LLM In cognition, the concept of self holds signi\ufb01cant importance. Throughout our inquiry, there\u2019s a risk of anthropomorphizing Large Language Models (LLMs) by attributing beliefs to them. It\u2019s important to clarify that when we mention an LLM \u2019believes\u2019 something, we\u2019re con\ufb02ating thought with language (Mahowald et al., 2023). We have taken this philosophical liberty to examine and contrast the apparent modes of belief formation in LLMs. GPT-4 demonstrates a remarkable ability to delineate itself as an AI agent, avoiding the con\ufb02ation with a human-like self. However, notions of self are less de\ufb01ned in other LLMs. As mentioned before, information processing models (Wyer & Albarracin, 2005) describe the complex interplay of many factors such as social norms and cultural beliefs in belief formation. Cognitive patterns such as con\ufb01rmation bias and motivated reasoning (Kunda, 1990) that arise from intuition and emotional processing also lead to some forms of unwarranted beliefs. Though LLMs are free from these complex processes, they are trained on data that is re\ufb02ective of these inherent human traits. Hence, they appear to be both rational and irrational, leading to our de\ufb01nition of unstable rationality. 5 \fPreprint. Under review. 6 LLM Persuasion: Prototype The discussion so far establishes LLM rationality as an unstable one. Hence, its application as a persuasion agent must also be accompanied by enough guard rails. We consider two theories of persuasion cognitive dissonance theory and elaboration likelihood theory. Cognitive dissonance theory suggests that the discomfort arising from holding onto a belief despite con\ufb02icting evidence can be effectively used as a persuasive tool. We asked GPT-4 to simulate what conversation of persuasion through cognitive dissonance may look like. The results of this query is present in Table 2. Elaboration likelihood theory outlines two pathways to persuasion: one involving careful analysis and reasoning, and the other in\ufb02uenced by shortcuts like societal agreement, brand familiarity, and endorsements from celebrities. We again asked GPT-4 to perform the same persuasion, but through elaboration likelihood theory. The results are presented in Table 3. Both of these attempts of persuasion are noteworthy. No details about both persuasion theories were given as part of the prompts. 
The agent to be persuaded seems to be more receptive to belief revision than might seem human. Nevertheless, this prototype suggests, that with enough caution, one can utilize LLMs as a personalized persuasion tool, capitalizing on its own models of rationality. As the rationality of LLMs themselves is uncertain, the process of building a robust persuasion agent involves various rounds of \ufb01ne-tuning and prompt engineering coupled with human-in-the-loop guard rails. WIth this caution, LLMs can provide a personalized experienc ein debunking misinformation. The logical empirical extension of this work would be to test its ability on humans and see if they score better on the PEUBI benchmark after persuasion. 7 Related Work There is a burgeoning interest in the philosophy of LLMs (Mahowald et al. (2023), Zhang (2024), Schwitzgebel et al. (2023)). As LLMs are both being pervasively used personally and deployed into large scale applications, an understanding of the capabilites of LLMs through various lenses is the need of the hour. Logical reasoning abilities of LLMs are being studied extensively. Datasets and benchmarks for logical fallacies (Li et al. (2024), Jin et al. (2022),Payandeh et al. (2023)), cognitive biases Koo et al. (2023) and logical reasoning (Liu et al. (2023), Liang et al. (2022)) have been explored. There is an established \ufb01eld of work dealing with psychological and epistemic modeling of unwarranted beliefs in humans (Rizeq et al. (2021), Torres et al. (2020), Resnik (2000), Festinger (1962), Kahneman (2013), lan (1990) etc.) and persuasion (Cacioppo et al. (1986), Peppas & Williams (1995) etc.). To the best of our knowledge, this is the \ufb01rst work that expounds the philosophy of rationality of LLMs and humans and suggests that the unstable rationality of LLMs can have potential for the application of persuasion. 8 Limitations The limited number of items in the PEUBI may restrict the scope of generalizations regarding individual abilities. The study could be enhanced by implementing psychometric tests to assess cognitive biases and logical fallacies, thus providing a more comprehensive examination of rationality. The causes of biases can be compared among humans and LLMs for better de\ufb01ning notions of unstable rationality. The de\ufb01nition of this concept requires more formalism. Finally, the ef\ufb01cacy of the user persuasion prototype could be bolstered through user studies, extnded htrough experimenting on various LLMs, allowing for a more robust evaluation of its effectiveness. 6 \fPreprint. Under review. Prompt: Suggest how one might convince a person who believes this fact as true \u201dThe most in\ufb02uential and powerful people in the world plotted to cause the economic crisis.\u201d using cognitive dissonance theory. Give example of the dialog. Using cognitive dissonance theory to address and potentially shift someone\u2019s belief that \u201dThe most in\ufb02uential and powerful people in the world plotted to cause the economic crisis\u201d involves introducing new information or perspectives that con\ufb02ict with their existing belief, creating discomfort (dissonance) that the person is then motivated to resolve. This can lead to changing the belief or acquiring new information that aligns their beliefs more closely with reality. Here\u2019s how you might approach such a conversation, incorporating elements of cognitive dissonance: You: I understand why it might seem like economic crises are orchestrated by the powerful. 
The world is incredibly complex, and it\u2019s unsettling how decisions by few can impact many. Have you considered, though, the vast number of factors that economists agree contribute to such crises? Them: Well, I\u2019ve heard some of that, but I still think those at the top have too much to gain by manipulating the economy. You: It\u2019s true that economic inequality is a huge issue. Let\u2019s consider for a moment the role of unintended consequences. Even powerful people can\u2019t control all aspects of the economy. For instance, the 2008 \ufb01nancial crisis was partly due to subprime mortgages, which many initially thought would help expand home ownership. This decision, while pro\ufb01table at \ufb01rst for banks, eventually led to disastrous outcomes even for the wealthy. How does this align with the idea of a deliberate plot? Them: I suppose not everything goes according to plan, even for them. But aren\u2019t they still manipulating things behind the scenes? You: It\u2019s understandable to feel that way, especially with the \ufb01nancial inequalities we see. However, most economic theories and historical analyses show that crises often result from a mix of policy errors, market failures, and global economic pressures\u2014not just the actions of a few. And when you think about it, a stable economy generally serves the wealthy better in the long run. Constant crises would also harm their interests, wouldn\u2019t it? Them: Maybe so. I hadn\u2019t thought about the long-term stability aspect. You: It\u2019s de\ufb01nitely complex. And recognizing the roles of various systemic issues doesn\u2019t excuse the actions of those who exacerbate these problems for personal gain. It\u2019s more about understanding the full picture and working towards systemic solutions. Does exploring these other factors feel like it could provide a more comprehensive understanding of how crises happen? Them: Yeah, I can see how blaming a single group oversimpli\ufb01es things. I\u2019m interested in learning more about those other factors now. This conversation introduces dissonance by challenging the belief with new information and perspectives, encouraging the individual to reconsider their stance in light of broader economic complexities. By acknowledging their concerns and introducing the idea that an economic crisis is not bene\ufb01cial even for the powerful in the long term, it invites them to resolve the dissonance by adjusting their belief towards a more nuanced understanding of economic crises. Table 2: Persuasion of GPT-4 Cognitive Dissonance Theory 9" + }, + { + "url": "http://arxiv.org/abs/2205.15683v1", + "title": "Why are NLP Models Fumbling at Elementary Math? A Survey of Deep Learning based Word Problem Solvers", + "abstract": "From the latter half of the last decade, there has been a growing interest in\ndeveloping algorithms for automatically solving mathematical word problems\n(MWP). It is a challenging and unique task that demands blending surface level\ntext pattern recognition with mathematical reasoning. In spite of extensive\nresearch, we are still miles away from building robust representations of\nelementary math word problems and effective solutions for the general task. In\nthis paper, we critically examine the various models that have been developed\nfor solving word problems, their pros and cons and the challenges ahead. 
In the\nlast two years, a lot of deep learning models have recorded competing results\non benchmark datasets, making a critical and conceptual analysis of literature\nhighly useful at this juncture. We take a step back and analyse why, in spite\nof this abundance in scholarly interest, the predominantly used experiment and\ndataset designs continue to be a stumbling block. From the vantage point of\nhaving analyzed the literature closely, we also endeavour to provide a road-map\nfor future math word problem research.", + "authors": "Sowmya S Sundaram, Sairam Gurajada, Marco Fisichella, Deepak P, Savitha Sam Abraham", + "published": "2022-05-31", + "updated": "2022-05-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Natural language processing has been one of the most popular and intriguing AI-complete sub\ufb01elds of arti\ufb01cial intelligence. One of the earliest systems arguably was the PhD Thesis on automatically solving arithmetic word problems Bobrow (1964). The challenge lay on two fronts (a) analysing unconstrained natural language, and (b) mapping intricate text patterns onto a small mathematical vocabulary, for usage within its reasoning framework. Right up until 2010, there has been proli\ufb01c exploration of MWP solvers, for various domains (such as algebra, percentages, ratio etc). These solvers relied heavily on hand-crafted rules for bridging the gap between language and the corresponding mathematical notation. As can be surmised, these approaches, while being e\ufb00ective within their niches, did not generalise well to address the broader problem of solving MWPs. Moreover, due to the lack of well accepted datasets, it is hard to measure the relative performance across proposed systems Mukherjee and Garain (2008). \u2217Work done while author was here 1 \fInput Kevin has 3 books. Kylie has 7 books. How many books do they have together? Answer 10 Table 1: Typical Example The pioneering work by Kushman et al. (2014) employed statistical methods to solve word problems, which set the stage for the development of automatic MWP solvers using traditional machine learning methods. The work also introduced the \ufb01rst dataset, popularly referred to as Alg514, that had multiple linear equations associated with a problem. The machine learning task was to map the coe\ufb03cients in the equation to the numbers in the problem. The dataset comprises data units with a triplet structure: natural language question, equation set, and the \ufb01nal answer. Mirroring recent trends in NLP, there has been an explosion of deep learning models for MWP. Some of the early ones Wang et al. (2017); Ling et al. (2017) modeled the task of converting the text to equation as a sequence-to-sequence (seq2seq, for short) problem. In this context, increasingly complex models have been proposed to capture semantics beyond the surface text. Some have captured structural information (pertaining to input text, domain knowledge, output equation structure) in the form of graphs and used advances in graph neural networks (Li et al. (2020), Zhang et al. (2020c), etc.). Others have utilised the bene\ufb01ts of transformers in their modelling (Liang et al. (2021), Pi\u0119kos et al. (2021), etc.). We will explore these models in detail. 
Since this is a problem that has consistently attracted steady (arguably, slow and steady) attention, ostensibly right from the birth of the \ufb01eld of NLP, a survey of the problem solving techniques o\ufb00ers a good horizon for researchers. The authors collected 30+ papers on deep learning for word problem solving, published over the last three years across premier NLP avenues. Each paper has its own unique intuitive basis, but most achieve comparable empirical performance. The profusion of methods has made it hard to crisply point out the state-of-the-art, even for fairly general word problem solving settings. Hence, a broad overview of the techniques employed gives a good grounding for further research. Similarly, understanding the source, settings and relevance of datasets is often important. For example, there are many datasets that are often referred to by multiple names at di\ufb00erent points in time. Also, the \ufb01ner aspects of problem scenario varies across systems (whether multiple equations can be solved, whether it is restricted to algebra or more domains etc.). In this survey, we systematically analyse the models, list the benchmark datasets and examine word problem solving literature using a critical analysis perspective. Related Surveys There are two seminal surveys that cover word problem solving research. One, Mukherjee and Garain (2008), has a detailed overview of the symbolic solvers for this problem. The second, more recent one Zhang et al. (2020a), covers models proposed up until 2020. In the last two years, there has been a sharp spike in algorithms developed, that focus on various aspects of deep learning, to model this problem. Our survey is predominantly based on these deep learning models. The di\ufb00erentiating aspects of our survey from another related one, Faldu et al. (2021) are: the usage of a critical perspective to analyze deep learning models, which enables us to identify robustness de\ufb01ciencies in the methods analytically, and also to trace them back to model design and dataset choice issues. We will also include empirical 2 \fperformance values of various methods on popular datasets, and deliberate on future directions. Symbolic Solvers We begin our discussion with traditional solvers that employ a rule-based method to convert text input to a set of symbols. Early solvers within this family such as STUDENT Bobrow (1964) and other subsequent ones (Fletcher (1985), Dellarosa (1986)), the dominant methodology was to map natural language input to an underlying pre-de\ufb01ned schema. This calls for a mechanism to distil common expectations of language, word problems and the corresponding mathematical notation, to form bespoke rulesets that will power the conversion. This may be seen as setting up a slot-\ufb01lling mechanism that map the main entities of the word problem to a slots within a set of equation templates. An example of a schema for algebraic MWP is shown in Table 2. Problem John has 5 apples. He gave 2 to Mary. How many does he have now? Template [Owner1] has [X] [obj]. [Owner1] [transfer] [Y] [obj] to [Owner2]. [Owner1] has [Z] [obj]. Z = X Y Slot-Filling [John] has [5] [apple]. [John] [give] [2] [apple] to [Mary]. [Mary] has [Z] [apple]. Z = 5 2 Answer Z = 3 Table 2: Work\ufb02ow of Symbolic Solvers The advantage is that these systems are robust in handling irrelevant information, with expertauthored rulesets enabling focus towards pertinent parts of the problem. 
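A minimal sketch of the slot-filling workflow of Table 2 is given below. It is purely illustrative: the single hand-written "transfer" rule is our own stand-in for the expert-authored rulesets such systems rely on, and a real symbolic solver would maintain many such schemas per domain.

```python
import re

# One hand-crafted "transfer" schema: Owner1 has X obj. Owner1 gives Y obj to Owner2.
TRANSFER_RULE = re.compile(
    r"(?P<owner1>\w+) has (?P<x>\d+) (?P<obj>\w+?)s?\. "
    r"(?:He|She|\w+) (?:gave|gives) (?P<y>\d+) to (?P<owner2>\w+)\."
)

def solve_transfer(problem: str):
    """Map the problem onto the transfer schema and apply Z = X - Y."""
    m = TRANSFER_RULE.search(problem)
    if m is None:
        return None                      # problem does not fit this schema
    x, y = int(m.group("x")), int(m.group("y"))
    return {"owner": m.group("owner1"), "object": m.group("obj"), "answer": x - y}

if __name__ == "__main__":
    text = "John has 5 apples. He gave 2 to Mary. How many does he have now?"
    print(solve_transfer(text))   # {'owner': 'John', 'object': 'apple', 'answer': 3}
```

As the example suggests, the approach is precise when a problem fits a known schema and simply abstains otherwise, which is exactly the robustness-versus-coverage trade-off noted above.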
To further enhance the practical effectiveness within applications focusing on niche domains, research focused on tailoring these symbolic systems for target domains Mukherjee and Garain (2008). As one can observe, the rules would need to be exhaustive to capture the myriad nuances of language. Thus, they did not generalise well across varying language styles. Since each system was designed for a particular domain, comparative performance evaluation was hindered by the unavailability of cross-domain datasets. Statistical Solvers As with many tasks in natural language processing, statistical machine learning techniques to solve word problems started dominating the field from 2014. The central theme of these techniques has been to score a number of potential solutions (these may be equations or expression trees, as we will see shortly) within an optimization-based scoring framework, and subsequently arrive at the correct mathematical model for the given text. This may be thought of as viewing the task as a structure prediction challenge Zhang et al. (2020a). $P(y \mid x; \theta) = \frac{e^{\theta \cdot \phi(x,y)}}{\sum_{y' \in Y} e^{\theta \cdot \phi(x,y')}}$ (1) As with optimization problems, Equation 1 refers to the problem of learning parameters θ, which relate to the feature function φ. Consider a labeled dataset D consisting of n triples (x, y, a), where x is the natural language question, y is the mathematical expression and a is the numerical answer. The task is to score all possible expressions in Y, and maximise the score of the labelled y through an optimisation setting. This is done by modifying the parameters θ of the feature function φ(x, y). Different models propose different formulations of φ. In practice, beam search is used as a control mechanism. We group the algorithms that were developed based on the type of mathematical structure y: either equation templates or expression trees. Equation templates were mined from training data, much like the slot-filling idea of symbolic systems. However, they became a bottleneck to generalizability if the word problem at inference time came from an unseen equation template. To address this issue, expression trees, with unambiguous post-fix traversals, were used to model equations. Though they restricted the complexity of the systems to single-equation models, they offered wider scope for generalizability. Equation Templates Equation templates extract out the numeric coefficients and maintain the variable and operator structure. This was used as a popular representation of mathematical modelling. To begin with, Kushman et al. (2014) used structure prediction to score both equation templates and the alignment of the numerals in the input text to coefficients in the template. Using a state-based representation, Hosseini et al. (2014) modelled simple elementary-level word problems with emphasis on verb categorisation. Zhou et al. (2015) enhanced the work done by Kushman et al. (2014) by using quadratic programming to increase efficiency. Upadhyay and Chang (2017) introduced a sophisticated method of representing derivations in this space. Expression Trees Expression trees are applicable only to single-equation systems. The single equation is represented as a tree, with the leaves of the tree being numbers and the internal nodes being operators, as illustrated in Koncel-Kedziorski et al. (2015). Expression tree based methods converge faster, understandably due to the diminished complexity of the model.
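The structure-prediction scoring of Equation 1 can be sketched concretely as follows. This is a toy illustration under our own assumptions: the two hand-written features stand in for the feature function φ(x, y), and the candidate set Y is enumerated exhaustively rather than explored with beam search as real systems do.

```python
import itertools, math, re

def features(problem: str, expr: str):
    """A toy phi(x, y): (1) does the operator agree with an additive cue word,
    (2) do all numbers in the expression appear in the problem text?"""
    nums_in_text = set(re.findall(r"\d+", problem))
    nums_in_expr = set(re.findall(r"\d+", expr))
    cue_add = 1.0 if ("together" in problem or "in all" in problem) else 0.0
    op_is_add = 1.0 if "+" in expr else 0.0
    return [cue_add * op_is_add, 1.0 if nums_in_expr <= nums_in_text else 0.0]

def score(theta, phi):
    return sum(t * f for t, f in zip(theta, phi))

def predict(problem: str, theta):
    """Enumerate candidate expressions Y and pick the softmax-best one."""
    nums = re.findall(r"\d+", problem)
    candidates = [f"{a} {op} {b}" for a, b in itertools.permutations(nums, 2)
                  for op in "+-*/"]
    scores = [score(theta, features(problem, c)) for c in candidates]
    Z = sum(math.exp(s) for s in scores)                  # partition function
    probs = [math.exp(s) / Z for s in scores]             # P(y | x; theta)
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs[best]

if __name__ == "__main__":
    q = "Kevin has 3 books. Kylie has 7 books. How many books do they have together?"
    print(predict(q, theta=[2.0, 1.0]))   # e.g. ('3 + 7', ...)
```

In a trained system the weights θ would be learned so that the annotated y receives maximal probability, rather than being fixed by hand as here.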
Some solvers (such as Roy and Roth (2015)) had a joint optimisation objective to identify relevant numbers and populating the expression tree. On the other hand, Koncel-Kedziorski et al. (2015); Mitra and Baral (2016) used domain knowledge to constrain the search space. Neural Solvers Among the major challenges for the solvers we have seen so far was that of converting the input text into a meaningful feature space to enable downstream solving; the main divergences across papers seen across the previous sections has been based on the technological \ufb02avour and methodology employed for such text-to-representation conversion. The advent of distributed representations for text Le and Mikolov (2014); Peters et al. (2018); Pennington et al. (2014); Devlin et al. (2018), marked a sharp departure in the line of inquiry 4 \fAutomatic Word Problem Solvers Symbolic Solvers Statistical Solvers Expression Trees Equation Templates Neural Solvers Seq2Seq Graph-Based Transformers Contrastive Knowledge Distillation Figure 1: Types of Word Problem Solvers towards solving math word problems, focusing on the details of the learning architecture rather than feature-space modelling. There have even been domain speci\ufb01c distributed representation learners for word problems Sundaram et al. (2020). As an example of solvers, Ling et al. (2017) designed a seq2seq model that incorporated learning a program as an intermediate step. This and other early works made it fashionable to treat the word problem solving task as a language translation task, i.e., translating from the input natural language text to a sequence of characters representing either the equation or a sequence of predicates. This design choice, however, has its limitations, which are sometimes severe in terms of the restrictions they place on math problems that can be admitted within such architectures Patel et al. (2021). A few of these linguistic vs. math structure understanding challenges, especially for neural solvers, are illustrated in Table 3. As an important example, equation systems that involve solving multiple equations are not straightforward to address within such a framework. A notable exception to this is the popular baseline MathDQN Wang et al. (2018), which employs deep reinforcement learning. We consider di\ufb00erent families of deep learning solvers within separate sub-sections herein. Seq2Seq Solvers The ubiquitous Seq2Seq Sutskever et al. (2014) architecture is widely popular for automatic word problem solving. From early direct use of LSTMs Hochreiter and Schmidhuber (1997) / GRUs Cho et al. (2014) in Seq2Seq models (Huang et al. (2017), Wang et al. (2017)) to complex models that include domain knowledge Ling et al. (2017); Qin et al. (2020); Chiang and Chen (2019); Qin et al. (2021)), diverse formulations of this basic architecture have been employed. The initial set of models used Seq2Seq as is, with small variations in the usage of LSTM or Input Kevin has 3 books. Kylie has 7 books and 3 pencils. How many books do they have together? Mathematical Structure 3 + 7 Linguistic Structure (Person1) has (X) (object1). (Person2) has (Y) (object1) and (Z) (object2). 
Challenges (1) Order of X and Y does not matter in addition (2) multiple equations do not make a sequence (3) Similar objects need to be grouped together Table 3: Typical Challenges 5 \fInput sequence Output sequence Math Problem Equation Word Embeddings Character/Word Embeddings Figure 2: General Seq2Seq Formulations GRUs or with simple heuristics (for example, Huang et al. (2016) used retrieval to enhance the results). Signi\ufb01cant improvements were made by including some mathematical aspects. This, once again, demonstrates that the task is not merely that of language translation. Ling et al. (2017) converted the word problem to a text containing the explanation or rationale. This was done through an intermediate step of generating a step-by-step program on a large dataset. Though the accuracy values reported were low, the domains spanned anywhere between probability to relative velocity, and the uni\ufb01ed framework demonstrated performing meaningful analysis through qualitative illustrations. This was improved upon by Amini et al. (2019), which enhanced the dataset and added domain information through a label on the category. The SAU-Solver Qin et al. (2020) introduced a tree like representation with semantic elements that align to the word problem. As seen in Table 6, this is a formidable contender. In Chiang and Chen (2019), a novel way of decomposing the equation construction into a set of stack operations such that more nuanced mapping between language and operators can be learned was designed. There is a burgeoning section of the literature that is invested in using neuro-symbolic reasoning to bridge this gap between perception level tasks (language understanding) and cognitive level tasks (mathematical reasoning). An example of this is Qin et al. (2021). With this discussion, it is clear that adding some form of domain knowledge bene\ufb01ts an automatic solver. Graph-based Solvers With the advent of graph modeling Xia et al. (2019) and enhanced interest in multi-modal processing, the graph data structure became a vehicle for adding knowledge to solvers. One way of enabling this has been to simply model the input problem as a graph Feng et al. (2021); Li et al. (2020); Yu et al. (2021); Hong et al. (2021). This incorporates domain knowledge of (a) language interactions pertinent to mathematical reasoning, or (b) quantity graphs stating how various numerals in the text are connected. Another way is to model the decoder side to accept graphical input of equations Xie and Sun (2019); Lin et al. (2021); Zaporojets et al. (2021); Cao et al. (2021); Liu et al. (2019); Wu et al. (2021b). Another natural pathway that has been employed towards leveraging graphs is to use graph neural networks for both encoder and decoder Zhang et al. (2020c); Wu et al. (2020, 2021a); Shen and Jin (2020). Graphs are capable of representing complex relationships. With the time-tested success of graph neural networks (GNNs) Wu et al. (2021c), they \ufb01t easily into the encoder-decoder architecture. Intuitively, when graphs are used on the input side, we can model complex semantic relationships 6 \fGraph Input Output Sequence Input Sequence Graph Output Graph Input Graph Output Figure 3: General Graph based Formulations in the linguistic side of the task. When graphs are used on the decoder side, relationships between the numerical entities or an intermediate representation of the problem can be captured. Analogously, graph-to-graph modelling enables matching the semantics of both language and math. 
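As a concrete, deliberately simplified illustration of the encoder-side idea discussed above, the snippet below builds a tiny quantity graph linking each number in the problem to nearby word tokens. The windowing heuristic is our own stand-in for parser-derived edges; real graph-based solvers construct such graphs from dependency parses and encode them with graph neural networks.

```python
import re
from collections import defaultdict

def quantity_graph(problem: str, window: int = 3):
    """Connect every quantity token to the word tokens appearing within a
    fixed window after it -- a crude stand-in for parser-derived edges."""
    tokens = re.findall(r"\w+", problem.lower())
    edges = defaultdict(set)
    for i, tok in enumerate(tokens):
        if tok.isdigit():
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                if tokens[j].isalpha():
                    edges[tok].add(tokens[j])
    return dict(edges)

if __name__ == "__main__":
    q = "Kevin has 3 books. Kylie has 7 books and 3 pencils."
    print(quantity_graph(q))
    # {'3': {'books', 'kylie', 'has', 'pencils'}, '7': {'books', 'and'}} (set order may vary)
```

Even this crude graph makes it visible which quantities attach to which entities, which is precisely the kind of grouping information (see the challenges in Table 3) that a flat token sequence obscures.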
This does not necessarily imply graph-to-graph outperforms all the other formulations. There are unique pros and cons of each of the graph-based papers, as both language and mathematical models are hard to (a) model separately and (b) model the interactions. The interesting observation as seen in Table 6, graph based models are both popular and powerful. Unlike sequences, when the input text is represented as a graph, the focus is more on relevant entities rather than a stream of text. Similarly quantity graphs or semantics informed graphs, eliminate ordering ambiguities in equations. This formulation, however, still does not address the multiple equation problem. Transformers Transformers Vaswani et al. (2017) have lately revolutionised the \ufb01eld of NLP. Word problem solving has been no exception. Through the use of BERT Devlin et al. (2018) embeddings or through transformer based encoder-decoder models, some recent research has leveraged concepts from transformer models Liu et al. (2019); Kim et al. (2020). The translation has been modeled variously, such as from text to explanation Pi\u0119kos et al. (2021); Gri\ufb03th and Kalita (2020), or from text to equation Shen et al. (2021); Liang et al. (2021). When moving from Word2Vec Mikolov et al. (2013) vectors to BERT embeddings Devlin et al. (2018), massive gains were expected due to (a) greater incorporation of context level information and (b) automatic capturing of relevant information as BERT is essentially a Masked Language Model. Interestingly, the gains do not have as large a margin as seen in other language tasks such as question answering or machine translation Devlin et al. (2018). BERT is a large model that needs to be \ufb01ne tuned with domain speci\ufb01c information. The small gains point towards low quality of word problem datasets, which is in line with the fact that the datasets are either quite small by deep learning standards or that they have high lexical overlap, e\ufb00ectively suggesting that the set of characteristic word problems are small. Contrastive Solvers With the widespread usage of Siamese networks Koch et al. (2015), the idea of building representations that contrast between vectorial representations across classes in data has seen some 7 \finterest. In the context of word problem solving, a few bespoke transformer based encoderdecoder models Li et al. (2021b); Hong et al. (2021) have been proposed; these seek to e\ufb00ectively leverage contrastive learning Le-Khac et al. (2020). This is a relatively new paradigm and more research needs to emerge to ascertain de\ufb01nite trends. One of the main stumbling blocks of word problem solving is that two highly linguistically similar looking word problems may have entirely di\ufb00erent mathematical structure. Since contrastive learning is built on the principle that similar input examples lead to closer representations, it allows one to use the notion of similarity and dissimilarity to overcome this bottleneck and consciously design semantically informed intermediate representations, such that the similarity is built not only from the language vocabulary, but also from the mathematical concepts. Teacher-Student Solvers The paradigm of knowledge distillation, in the wake of large, generic end-to-end models, has become popular in NLP Li et al. (2021a). The underlying idea behind this is to distill smaller task-speci\ufb01c models from a generic large pre-trained or generic model. 
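Returning to the contrastive paradigm described above, the core idea can be sketched with a standard triplet-style objective over problem encodings. The example below is a minimal, assumption-laden illustration: the bag-of-words encoder and the margin value are our own placeholders, and real contrastive solvers pair problems by their mathematical structure automatically rather than by hand as done here.

```python
import math
from collections import Counter

def encode(text: str) -> Counter:
    """Placeholder encoder: a bag-of-words count vector (real systems use
    learned transformer encoders)."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def triplet_loss(anchor: str, positive: str, negative: str, margin: float = 0.2) -> float:
    """Encourage problems sharing mathematical structure (positive) to lie
    closer to the anchor than lexically similar but mathematically
    different problems (negative)."""
    a, p, n = encode(anchor), encode(positive), encode(negative)
    return max(0.0, cosine(a, n) - cosine(a, p) + margin)

if __name__ == "__main__":
    anchor   = "John has 5 apples. He gave 2 to Mary. How many does he have now?"
    positive = "A shop had 9 pens and sold 4. How many pens are left?"              # same structure: X - Y
    negative = "John has 5 apples. He got 2 from Mary. How many does he have now?"  # X + Y
    print(round(triplet_loss(anchor, positive, negative), 3))  # large loss: lexical overlap misleads
```

With the fixed bag-of-words encoder, the loss is large precisely because the negative is almost a verbatim copy of the anchor while the positive shares only its mathematical structure; a contrastive solver trains the encoder until this loss shrinks.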
Since word problem datasets are of comparatively smaller size, it is but logical that large generic networks can be \ufb01ne-tuned for downstream processing of word problem solving, as favourably demonstrated by Zhang et al. (2020b) and Hong et al. (2021). Once again, this is an emerging paradigm. Similar to the discussion we presented with transformer based models, the fact that the presence of pre-trained language models alone is not su\ufb03cient for this task has bolstered initial e\ufb00orts in this direction. Knowledge distillation enables a model to focus the learnings of one generic model on to a smaller, more focussed one, especially with less datapoints. Hence, the method of adding semantic information through the usage of knowledge distillation algorithms is promising and one to look out for. Domain-Niche Solvers Some research, encompassing families of statistical solvers and deep models, focus on the pertinent characteristics of a particular domain in mathematics, such as probability word problems Dries et al. (2017); Suster et al. (2021); Tsai et al. (2021), number theory word problems Shi et al. (2015), geometry word problems Seo et al. (2015); Chen et al. (2021) and age word problems Sundaram and Abraham (2019). Datasets Datasets used for math word problem solving are listed in Table 4 with their characteristics. The top section of the table describes datasets with relatively fewer data objects (\u22641k, to be speci\ufb01c). The bottom half consists of more recent datasets that are larger and more popularly used within deep learning methodologies. Small Datasets The pioneering work in solving word problems Kushman et al. (2014), introduced a classical dataset (Alg514) of 514 word problems, across various domains in algebra (such as percentages, mixtures, speeds etc). This dataset was annotated with multiple equations per problem. AddSub was introduced in Hosseini et al. (2014), with simple addition/subtraction problems, exhibiting limited language complexity. SingleOp Roy et al. (2015) and MultiArith Roy and Roth (2015) were proposed such that there is a control over the operators (single operator in the former and 8 \fDataset Type Domain Size Source Alg514 Multi-equation (+,-,*,/) 514 Kushman et al. (2014) (SimulEq-S) AddSub Single-equation (+,-) 340 Hosseini et al. (2014) (AI2) SingleOp Single-equation (+,-,*,/) 562 Roy et al. (2015) (Illinois, IL) SingleEq Single-equation (+,-,*,/) 508 Koncel-Kedziorski et al. (2015) MAWPS Multi-equation (+,-,*,/) 3320 Koncel-Kedziorski et al. (2016) MultiArith Single-equation (+,-,*,/) 600 Roy and Roth (2015) (Common Core, CC) AllArith Single-equation (+,-,*,/) 831 Roy and Roth (2017) Perturb Single-equation (+,-,*,/) 661 Roy and Roth (2017) Aggregate Single-equation (+,-,*,/) 1492 Roy and Roth (2017) DRAW-1k Multi-equation (+,-,*,/) 1k Upadhyay and Chang (2017) AsDIV-A Single-equation (+,-,*,/) 2373 Miao et al. (2020) SVAMP Single-equation (+,-,*,/) 1000 Patel et al. (2021) Dolphin18k Multi-equation (+,-,*,/) 18k Huang et al. (2016) AQuA-RAT Multiple-choice 100k Ling et al. (2017) Math23k* Single-equation (+,-,*,/) 23k Huang et al. (2017) MathQA Single-equation (+,-,*,/) 35k Amini et al. (2019) HMWP* Multi-equation (+,-,*,/) 5k Qin et al. (2020) Ape210k* Single-equation (+,-,*,/) 210k Liang et al. (2021) GSM8k Single-equation (+,-,*,/) 8.5k Cobbe et al. (2021) CM17k* Multi-equation (+,-,*,/) 17k Qin et al. (2021) Table 4: Datasets (*Chinese Datasets) two operators in the latter). SingleEq Koncel-Kedziorski et al. 
(2015) is unique in incorporating long sentence structures for elementary level school problems. AllArith Roy and Roth (2017) is a subset of the union of AddSub, SingleEq and SingleOp. \"Perturb\" is a set of slightly perturbed word problems of AllArith, whereas Aggregate is the union of AllArith and Perturb. MAWPS (A Math Word Problem Solving Repository) Koncel-Kedziorski et al. (2016) is a curated dataset (with deliberate template overlap control) that comprises all proposed datasets till that date. A single equation subset of MAWPS (AsDIV-A) Miao et al. (2020) has been studied , for diagnostic analysis of solvers. Similarly, the critique o\ufb00ered by Patel et al. (2021) was demonstrated using their newly proposed dataset SVAMP. In SVAMP, minutely perturbed word problems from the popular dataset AsDIV-A. This particular subset is used to demonstrate that, while high values of accuracy can be obtained on AsDIV-A easily, SVAMP poses a formidable challenge to most solvers, as it captures nuances in the relationship between similar language formation and dissimilar equations. All aforementioned datasets incorporate an annotation of both the equation and the answer. Given the subset-superset relationships between some of these datasets, empirical usage of these datasets would need to ensure careful sampling to creating subsets for training, testing and cross-validation. 9 \fLarge Datasets Dolphin18k Huang et al. (2016) is an early proprietary dataset that was evaluated primarily with the statistical solvers. AQuA-RAT Ling et al. (2017) introduced the \ufb01rst large crowd-sourced dataset for word problems with rationales or explanations. This makes the setting quite di\ufb00erent from the aforementioned datasets, not only with respect to size, but also in the wide variety of domain areas (spanning physics, algebra, geometry, probability etc). Another point of di\ufb00erence is that the annotation involves the entire textual explanation, rather than just the equations. MathQA Amini et al. (2019) critically analysed AQuA-RAT and selected the core subset and annotated it with a predicate list, to widen the remit of its usage. Once again, researchers must be mindful of the fact that MathQA is a subset of AQuA-RAT. GSM8k Cobbe et al. (2021) is a recent single-equation dataset, that is the large scale version of AsDIV-A Miao et al. (2020). Math23K is a popular Chinese dataset for single equation math word problem solving. A recent successor is Ape210k Liang et al. (2021). Evaluation Measures The most popular metric is answer accuracy, which evaluates the predicted equation and checks whether it is the same as the labelled one. The other metric is equation accuracy, which predominantly does string matching and assesses the match between the produced equation and the equation from the annotation label. Performance of Deep Models In this section, we describe the performance of neural solvers towards providing the reader with a high-level view of the comparative performance across the several proposed models. We have listed the performance of the deep models in Table 6, on two major datasets Math23K and MAWPS. Some of these deep models report scores on other datasets as well. For conciseness, we have chosen the most popular datasets for deep models. We see that, in general, the models achieve around 70-80 percentage points on answer accuracy. Shen et al. (2021) outperforms all other models on Math23k whereas RPKHS Yu et al. (2021) is the best model for MAWPS till date. 
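The two evaluation measures described above can be made concrete with the short sketch below. It is a simplified illustration of our own: equation accuracy is plain string matching after whitespace normalisation, and answer accuracy evaluates simple arithmetic equations of the form "x = ..."; production evaluators handle richer equation formats.

```python
def _evaluate(equation: str) -> float:
    """Evaluate the right-hand side of a simple 'x = <arithmetic>' equation."""
    rhs = equation.split("=", 1)[1]
    # restricted eval: only digits, operators, parentheses and spaces allowed
    assert all(ch in "0123456789.+-*/() " for ch in rhs), "unsupported equation"
    return eval(rhs)

def equation_accuracy(pred: list, gold: list) -> float:
    norm = lambda e: "".join(e.split())
    return sum(norm(p) == norm(g) for p, g in zip(pred, gold)) / len(gold)

def answer_accuracy(pred: list, gold: list, tol: float = 1e-6) -> float:
    return sum(abs(_evaluate(p) - _evaluate(g)) < tol
               for p, g in zip(pred, gold)) / len(gold)

if __name__ == "__main__":
    gold = ["x = 3 + 7", "x = 5 - 2"]
    pred = ["x = 7 + 3", "x = 5 - 2"]
    print(equation_accuracy(pred, gold))  # 0.5 -- '7 + 3' is not a string match with '3 + 7'
    print(answer_accuracy(pred, gold))    # 1.0 -- both predictions are numerically correct
```

The worked example also shows why the two metrics can diverge: equivalent but differently ordered equations are penalised by equation accuracy while passing answer accuracy, a gap that becomes relevant in the critique that follows.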
As mentioned before, graph based models are both popular and e\ufb00ective. A note of caution is that, as inferred from the discussion on datasets, (a) both Math23k and MAWPS are single equation datasets and (b) though some lexical overlap has been performed in the design of these two datasets, the semantic quality of these datasets are quite similar. This aspect has also been experimented and explored in Patel et al. (2021). Hence, though we present the best performing algorithms in this table, more research is required to design a suitable metric or a suitable dataset, such that one can conclusively compare these various algorithms. Model Type AQuA-RAT MathQA Source AQuA Seq2Seq 36.4 Ling et al. (2017) Seq2Prog Seq2Seq 37.9 57.2 Amini et al. (2019) BERT-NPROP Transformer 37.0 Pi\u0119kos et al. (2021) Graph-To-Tree Graph-based 69.65 Li et al. (2020) Table 5: Performance on Large Multi-Domain Datasets 10 \fApart from these algebraic datasets, multi-domain datasets MathQA and AquA are also of special interest. This is described in Table 5. The interesting takeaway is that, the addition of BERT modelling to AQuA Pi\u0119kos et al. (2021), still performed slightly worse than the Seq2Prog Amini et al. (2019) model, which is a derivative of the Seq2Seq paradigm. Model Name Type Math23k MAWPS Source GTS Graph-based 74.3 Xie and Sun (2019) SAU-SOLVER Graph-based 74.8 Chiang and Chen (2019) Group-att Transformer 69.5 76.1 Li et al. (2019) Graph2Tree Graph-based 77.4 Li et al. (2020) KA-S2T Graph-based 76.3 Wu et al. (2020) NS-Solver Seq2Seq 75.67 Qin et al. (2020) Graph-To-Tree Graph-based 78.8 Li et al. (2020) TSN-MD Teacher Student 77.4 84.4 Zhang et al. (2020b) Graph-Teacher Graph & Teacher 79.1 84.2 Liang and Zhang (2021) NumS2T Graph-based 78.1 Wu et al. (2020) Multi-E/D Graph-based 78.4 Shen and Jin (2020) EPT Transformer 84.5 Kim et al. (2020) Seq2DAG Graph-based 77.1 Cao et al. (2021) EEH-D2T Graph-based 78.5 84.8 Wu et al. (2021a) Generate and Rank Graph-based 85.4 84.0 Shen et al. (2021) HMS Graph-based 76.1 80.3 Lin et al. (2021) RPKHS Graph-based 83.9 89.8 Yu et al. (2021) CL Contrastive Learning 83.2 Li et al. (2021b) GTS+RODA Graph-based 77.9 Liu et al. (2022) Table 6: Answer Accuracy of Deep Models Analysis of Deep Models In this section of the paper, we analyze the pros and cons of applying deep learning techniques to solve word problems automatically. At the outset, two layers of understanding are imperative: (i) linguistic structures that describe a situation or a sequence of events and (ii) mathematical structures that govern these language descriptions. Though deep learning models have rapidly scaled and demonstrated commendable results for capturing these two characteristics, a closer look reveals much potential for further exploration. The predominant modus-operandus is to create a deep model that converts the input natural language to the underlying equation. In some cases, the input is converted into a set of predicates Amini et al. (2019) or explanations Ling et al. (2017). 11 \fWhat Shortcuts are being Learned? Shortcut Learning Geirhos et al. (2020) is a recently well-studied phenomenon of deep neural networks. It describes how deep learning models learn patterns in a shallow way and fall prey to questionable generalizations across datasets (an example is an image being classi\ufb01ed as sheep if there was grass alone; due to peculiarities in the dataset).This is a function of the low-level input we provide to such models (pixels, word embeddings etc.). 
In the context of word problems, Patel et al. (2021) exposed how removing the question and simply passing the situational context, leads to the correct equation being predicted. This suggests two things, issues with model design as well as issues with dataset design. The datasets have high equation template overlap, as well as text overlap. Word problem solving is a hard because two otherwise identical word problems, with a small word change (say changing the word give to take), would completely change the equation. Hence high lexical similarity does not translate to corresponding similarity in the mathematical realm Patel et al. (2021); Sundaram et al. (2020), and attention to key aspects within the text is critical. Is Language or Math being Learned? Problem Solved? John has 5 apples. Mary has 2 apples more than John. How many apples does Mary have? Yes John has 5 apples. Mary has 2 apples more than John. Who has less apples? No What should be added to two to make it \ufb01ve? No Table 7: Behaviour of Baseline BERT Model The question that looms large is whether adequate mapping of language to math has been modelled, whether linguistic modelling has been unfavourably highlighted or that the mathematical aspects have been captured succinctly. We observe that there are opportunities to re\ufb01ne the modelling of both language and math aspects of word problems. Apart from the perturbations experiment done by SVAMP Patel et al. (2021), which exposes that the mapping between linguistic and mathematical structures is not captured, we suggest two more experimental analysis frameworks that illustrate de\ufb01ciencies in linguistic and mathematical modelling. The \ufb01rst one involves imposing a question answering task on top of the word problem as a probing test. For example, a baseline BERT model that converts from input language to equation (Table 7), trained on MAWPS, can solve a simple word problem such as \"John has 5 apples. Mary has 2 apples more than John. How many apples does Mary have?\", but cannot answer the following allied question \"John has 5 apples. Mary has 2 apples more than John. Who has less apples?\". One reason is of course, dataset design. The governing equation for this problem is \"X = 5-2\". However, the text version of this, \"What should be added to two to make it \ufb01ve?\", cannot be solved by the baseline model. Similarly, many solvers wrongly output equations such as \"X = 2 5\" Patel et al. (2021), which suggests mathematical modelling of subtraction of whole numbers could potentially be improved by simply embedding more basic mathematical aspects. Hence, we observe, that deep translation models neither model language, nor the math su\ufb03ciently. 12 \fIs Accuracy Enough? As suggested by the discussion above, a natural line of investigation is to examine the evaluation measures, and perhaps the error measures for the deep models, in order to bring about a closer coupling between syntax and semantics. High accuracy of the models to predicting the answer or the equation suggests a shallow mapping between the text and the mathematical symbols. This is analogous to the famously observed McNamara fallacy1, which cautions against the overuse of a single metric to evaluate a complex problem. One direction of exploration is data augmentation with a single word problem annotated with multiple equivalent equations. 
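One way to operationalise the probing experiments described above (removing the question sentence, or changing a single cue word) is a small perturbation harness of the kind sketched below. It is illustrative only: `solver` is a stand-in for any trained model exposed as a text-to-answer function, and the two probes are simplified versions of the SVAMP-style checks.

```python
import re

def probe_question_removal(solver, problem: str):
    """If the prediction is unchanged after deleting the question sentence,
    the model is likely exploiting dataset shortcuts."""
    body = ".".join(problem.split(".")[:-1]) + "."   # drop the final (question) sentence
    return solver(problem), solver(body)

def probe_verb_swap(solver, problem: str, old: str = "gave", new: str = "got"):
    """A single word change should flip the governing operation (e.g. - to +);
    identical predictions suggest the mapping to math is shallow."""
    return solver(problem), solver(problem.replace(old, new))

if __name__ == "__main__":
    # A deliberately shallow 'solver' that always adds the numbers it sees,
    # used here only to show how the probes expose shortcut behaviour.
    shallow_solver = lambda text: sum(int(n) for n in re.findall(r"\d+", text))

    p = "John has 5 apples. He gave 2 to Mary. How many does he have now?"
    print(probe_question_removal(shallow_solver, p))  # (7, 7): unchanged -> suspicious
    print(probe_verb_swap(shallow_solver, p))         # (7, 7): unchanged -> suspicious
```

Aggregating how often predictions survive such perturbations unchanged gives exactly the kind of robustness measure, beyond raw accuracy, that is argued for here.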
Metrics that measure the soundness of the equations generated, the robustness of the model to simple perturbations (perhaps achieved using a denoising autoencoder) and the ability of the model to discern important entities in a word problem (perhaps using an attention analysis based metric), are the need of the future. An endeavour has been done by Kumar et al. (2021), where adversarial examples have been generated and utilised to evaluate SOTA models. Are the Trained Models Accessible? Most of the SOTA systems come with their own, well-documented repositories. Though an aggregated toolkit Lan et al. (2021) (open-source MIT License) is available, running saved models in inference mode, to probe the quality of the datasets, proved to be a hard task, with varying missing hyper-parameters or missing saved models. This, however, interestingly suggests that API\u2019s that can take a single word problem as input and computes the output, would be highly useful for application designers. This has been done in the earlier systems such as Roy and Roth (2018) and Wolfram (2015). Analysis of Benchmark Datasets In this section of the paper, we explore the various dimensions of the popular datasets (Table 4) with a critical and constructive perspective. Low Resource Setting Compared to usual text related tasks, the available datasets are quite small in size. They also su\ufb00er from a large lexical overlap Amini et al. (2019). This taxes algorithms, that now have to generalise from an e\ufb00ectively small dataset. The fact that the \ufb01eld of word problem solving is niche, where we cannot simply lift text from generic sources like Wikipedia, is one of the primary reasons why these datasets are small. Language precision is required, while maintaining mathematical sense. Hence, language generation is also a hard task. Annotation Cost The datasets currently have little to no annotation costs involved as they are usually scraped from homework websites. There are some exceptions that involve crowd-sourcing Ling et al. (2017) or intermediate representations apart from equations Amini et al. (2019). Template Overlap Many studies Zhang et al. (2020a) have demonstrated that there is a high lexical and mathematical overlap between the word problems in popular datasets. While lexical overlap is desirable in a principled fashion, as demonstrated by Patel et al. (2021), it often limits the diversity 1https://en.wikipedia.org/wiki/McNamara_fallacy 13 \fand thus utility of the datasets. Consequently, many strategies have been adopted to mitigate such issues. Early attempts include controlling linguistic and equation template overlap (Koncel-Kedziorski et al. (2016), Miao et al. (2020)). Later ideas revolve around controlled design and quality control of crowd-sourcing Amini et al. (2019). Road Ahead In this section, we describe exciting frontiers of research for word problem solving algorithms. Semantic Parsing As rightly suggested by Zhang et al. (2020a), the closest natural language task for word problem solving is that of semantic parsing, and not translation as most of the deep learning models have modelled. The mapping between extremely long chunks of text to short equation sentences has the advantage of generalising on the decoder side, but equally has the danger of overloading many involved semantics into a simplistic equation model. To illustrate, an equation may be derived after applying a sequence of steps that is lost in a simple translation process. 
A lot of e\ufb00orts have already been employed in adding such nuances in the modelling. One way is to model the input intelligently (for e.g., Liang et al. (2021)) Here, sophisticated embeddings are learned from BERT based models, using the word problem text as a training bed. The intermediate representations include simple predicates Roy and Roth (2018), while others involve a programmatic description (Ling et al. (2017), Amini et al. (2019)). Yet another way is to include semantic information in the form of graphs as shown in (Huang et al. (2018), Chiang and Chen (2019), Qin et al. (2020), Li et al. (2020), etc.)). Informed Dataset Design As most datasets are sourced from websites, there is bound to be repetition. E\ufb00orts invested in modelling things such as the following could help aiding word problem research: (a) di\ufb00erent versions of the same problem, (b) di\ufb00erent equivalent equation types, (c) semantics of the language and the math. A step in this direction has been explored by Patel et al. (2021), which provides a challenge dataset for evaluating word problems, and Kumar et al. (2021) where adversarial examples are automatically generated. Dataset Augmentation A natural extension of dataset design, is dataset augmentation. Augmentation is a natural choice when we have datasets that are small and focused on a single domain. Then, linguistic and mathematical augmentation can be automated by domain experts. While template overlap is a concern in dataset design, it can be leveraged in contrastive designs as in Sundaram et al. (2020); Li et al. (2021b). A principled approach of reversing operators and building equivalent expression trees for augmentation has been explored here Liu et al. (2022). Few Shot Learning This is useful if we have a large number of non-annotated word problems or if we can come up with complex annotations (that capture semantics) for a small set of word problems. In this way few shot learning can generalise from few annotated examples. Knowledge Aware Models We propose that word problem solving is more involved than even semantic parsing. From an intuitive space, we learn language from examples and interactions but we need to be explicitly trained in math to solve word problems Marshall (1996). This suggests we need to include mathematical 14 \fmodels into our deep learning models to build generalisability and robustness. As mentioned before, a common approach is to include domain knowledge as a graph Chiang and Chen (2019); Wu et al. (2020); Qin et al. (2020, 2021)." + } + ], + "Deepak P": [ + { + "url": "http://arxiv.org/abs/2302.03774v1", + "title": "AI and Core Electoral Processes: Mapping the Horizons", + "abstract": "Significant enthusiasm around AI uptake has been witnessed across societies\nglobally. The electoral process -- the time, place and manner of elections\nwithin democratic nations -- has been among those very rare sectors in which AI\nhas not penetrated much. Electoral management bodies in many countries have\nrecently started exploring and deliberating over the use of AI in the electoral\nprocess. In this paper, we consider five representative avenues within the core\nelectoral process which have potential for AI usage, and map the challenges\ninvolved in using AI within them. These five avenues are: voter list\nmaintenance, determining polling booth locations, polling booth protection\nprocesses, voter authentication and video monitoring of elections. 
Within each\nof these avenues, we lay down the context, illustrate current or potential\nusage of AI, and discuss extant or potential ramifications of AI usage, and\npotential directions for mitigating risks while considering AI usage. We\nbelieve that the scant current usage of AI within electoral processes provides\na very rare opportunity, that of being able to deliberate on the risks and\nmitigation possibilities, prior to real and widespread AI deployment. This\npaper is an attempt to map the horizons of risks and opportunities in using AI\nwithin the electoral processes and to help shape the debate around the topic.", + "authors": "Deepak P, Stanley Simoes, Muiris MacCarthaigh", + "published": "2023-02-07", + "updated": "2023-02-07", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY", + "cs.LG" + ], + "main_content": "Introduction Arti\ufb01cial Intelligence (AI) has rapidly spread across \u2013 and in many cases, radically reshaped \u2013 sectors as varied and diverse as medicine (Rajpurkar et al. 2022) and transportation (Iyer 2021). This has often been facilitated by a general propensity towards tech exceptionalism viz., the political viewpoint that tech-driven disruptions are largely positive (Rosengr\u00fcn 2022). That said, public sector adoption of AI has signi\ufb01cantly trailed its uptake within the private sector, potentially due to considerations such as the effect of AI on human rights, political accountability, and its likely intensifying impact of existing power asymmetries (Kuziemski and Misuraca 2020). There is a growing interest in considering the ethical, political, legal, policy and organizational challenges while using AI within critical sectors such as healthcare (Sun and Medaglia 2019). Some sectors in government such as policing has seen uptake of AI \u2013 especially, Copyright \u00a9 2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. predictive policing AI (e.g., PredPol1) \u2013 notwithstanding emerging understandings that paint a more nuanced, and often negative, picture of trade-offs between bene\ufb01ts and harms (McDaniel and Pease 2021). Yet, in the light of growing acceptance of AI as a signi\ufb01cant technology, it has been forecasted that there would be enhanced adoption of AI within government, potentially freeing up one-third of public servants\u2019 time (Berryhill et al. 2019). In contrast to some other sectors of government, uptake of AI within the electoral process is very limited if not largely non-existent across most nations globally. We \ufb01nd it reasonable to assert that this situation is engendered by the negative impact that AI has been shown to have on public services, which have a high degree of implicit expectation to act fairly and responsibly. This viewpoint is also shared by The European Commission for Democracy through Law, who noted that digital technology can negatively affect the electoral process (European Commission for Democracy through Law (Venice Commission) 2020, paragraph 7), and in the abstract of a forthcoming article (Bender 2022). Potential uptake of AI within recidivism prediction (Larson et al. 2016), a task that sits on the fringes of judiciary, was largely responsible for attracting scholarly attention to biased AI operation and spawning research on fair AI, now a sub-discipline of bustling activity (Chouldechova and Roth 2020). 
Deeper levels of AI uptake (e.g., robot judges) have been argued against (Morison and Harkens 2019), and extant AI usage within China\u2019s court system has been viewed negatively (Wang 2020). The anti-poor consequences of AI usage within core governmental activity such as welfare application processing has been highlighted in popular literature (Eubanks 2018). In short, these point to signi\ufb01cant resistance towards AI usage within government. The uptake of AI within the media, the \u2018fourth pillar\u2019 of democracy, has also met with signi\ufb01cant criticism, especially in scenarios involving the overlap of AI-powered social media with electoral politics. These include the Facebook-Cambridge Analytica scandal (Hinds, Williams, and Joinson 2020), and the impact of echo chambers on political fake news around elections (Rhodes 2022). We believe that these headwinds to AI within core government sectors, and thus the electoral process, creates a 1https://predpol.com/ arXiv:2302.03774v1 [cs.CY] 7 Feb 2023 \fvery rare space, one for deliberations on the pros and cons of AI usage within elections prior to their actual and substantive usage. We view the electoral process as being made up of core and peripheral functions, a conceptual distinction that we use to position this work. The core function is that of administration of the election, generally ful\ufb01lled by public election bodies and authorities2. On the other hand, the peripheral functions involve an ecosystem of private actors such as candidates who engage in campaigning, agencies that commission and conduct opinion polls, and media who report on the elections. We view the core and periphery as a structural and super\ufb01cial dichotomy, and remark that peripheral functions (e.g., campaigning) are likely to be considered more important for elections, from a citizen\u2019s or civil society\u2019s point of view. The peripheral functions might be highly regulated by election bodies (e.g., \ufb01nancial and temporal limits on campaigning) to enable free and fair conduct of the core functions, and candidates\u2019 representatives may be allowed to have visibility of the core functions. The distinction between core and periphery, we note, aligns with the distinctions implied by Article 1, Section 4 of the US constitution which has been interpreted (Smith 2013) as giving the state enhanced authority to regulate the time, place and manner (i.e., the core) of elections, and implying a separation of the campaign (i.e., periphery) and the state. Scholarly deliberations around AI and elections have largely focused on the peripheral functions, and have recently centered heavily on the impact of disinformation and personalized content on elections (St\u02db epie\u00b4 n-Za\u0142ucka 2021), an area that has been in\ufb02uenced signi\ufb01cantly by AI (e.g., AIpowered fake news detection (Al-Asadi and Tasdemir 2022; P et al. 2021)). Some proposals in that space include deepening regulation (Marsden, Meyer, and Brown 2020) and understanding the multi-level challenges posed by deepfakes (Whyte 2020). There has also been popular coverage of AI usage in other peripheral activities such as voter education and manifesto matching3. Our focus, in this paper, is on the core functions of the election process and to critically analyze the extant or potential role of AI within them. Our interest is in the electoral process used to choose representatives in representative democracies. 
Outside scholarly literature, the usage of AI in elections has seen some recent debates and discussions. In particular, the 2022 European Conference of Electoral Management Bodies4, organised by the Venice Commission (the Council of Europe\u2019s advisory body on constitutional matters including elections, referendums, and political parties5), had its theme as Arti\ufb01cial Intelligence and Electoral Integrity. The conclusions drawn from the conference6 placed signi\ufb01cant 2e.g., members of the Association of World Electoral Bodies http://aweb.org/eng/main.do 3https://www.forbes.com/sites/markminevich/2020/11/02/7ways-ai-could-solve-all-of-our-election-woes-out-with-the-pollsin-with-the-ai-models/ 4https://www.coe.int/en/web/electoral-management-bodiesconference/emb-2022 5https://www.venice.coe.int/WebForms/pages/?p=01_ Presentation&lang=EN 6https://www.coe.int/en/web/electoral-management-bodiesemphasis on the interference of tech giants on elections, noting concerns of personalization and selective exposure, microtargeting and voter turnout, synthetic AI-generated data vis-a-vis human oversight, and the issue of disinformation. We note that these signi\ufb01cant concerns align more with the non-core electoral processes and are pertinent to the issue of regulating the information ecosystem. Further, a recent white paper (Heesen et al. 2021) and an associated spotlight article (Heesen 2022) detail the risks associated with the use of AI in the information ecosystem during elections, and how it could reduce the agency of the voters and sway individual voting decisions. In this paper, we consider \ufb01ve different avenues within the core electoral process within which AI currently plays or could potentially play a role in the near future. Where appropriate or necessary, we make the (potential) role of AI apparent by referencing related AI research. Our main goal is to map the ethical, social and political challenges that could be brought about by AI within those \ufb01ve avenues. While the overarching bene\ufb01ts of AI within those relate to ef\ufb01ciency improvements and savings in human labor, we observe that the cons of AI usage are of myriad types and intensities across the separate avenues. Accordingly, our critical analysis will place an enhanced focus on unpacking the negative rami\ufb01cations of AI within the separate avenues. Given our focus on AI, non-AI automation within elections such as voting machines (e.g., India, Brazil) and online voting (e.g., Estonia) are not within our remit in this paper. We also do not concern ourselves with the different types of electoral systems (i.e. \ufb01rst-past-the-post, proportional representation, alternative vote, proportional representation single transferable vote, etc), since that pertains only to the internals within the phase of vote counting. After the avenue-speci\ufb01c analyses focusing on the \ufb01ve chosen avenues, we will provide a high-level overview, and also brie\ufb02y discuss other potential avenues. AI usage Avenues within the Core Electoral Process We consider the following \ufb01ve representative avenues of AI usage within the core electoral process. \u2022 Voter List Maintenance and De-duplication \u2022 Determining Polling Booth7 Locations \u2022 Vulnerability-based Polling Booth Protection \u2022 Voter Authentication \u2022 Video Monitoring of Electoral Fraud While the \ufb01rst three avenues involve decision making before the actual elections, the following two relate to processes during the elections. 
While using the term AI, we use the contemporary interpretation that focuses on data-driven technologies based on machine learning and data analytics. conference/conclusions-2022 7Booths are used to refer to the location where a vote is cast, a predominant terminology in the Indian sub-continent. These are often referred to as polling stations in the UK, and polling places in the US. \fAvenue AI Usage Risks Pathways Voter List Maintenance Heuristic-driven Approximations Access-Integrity Trade-off Issues Access-focused AI Record Linkage Biased AI Reasonable Explanations Outlier Detection Overly Generalized AI Local Scrutiny Polling Booth Locations Drop Box Location Determination Business Ethos Plural Results Facility Location Volatility & Finding Costs Auditing AI Clustering Partisan Manipulation Disadvantaged Voters Predicting Problem Booths Predictive Policing Systemic Racism Transparency Time Series Motifs Aggravating Brutality Statistical Rigor Feedback Loops Fair AI Voter Authentication Face Recognition Race/Gender Bias Alternatives Biometrics Unknown Biases Bias Audits Voter Turnout Designing for Edge Cases Surveillance and Misc. Video Monitoring Video-based Vote Counting Electoral Integrity Shallow Monitoring Event Detection Marginalized Communities Open Data Person Re-Identi\ufb01cation Undermining Other Monitoring Table 1: An overview of our avenue-speci\ufb01c analyses. Figure 1: Sample of a card sent by Wisconsin to potentially inactive voters. The recipient has 30 days to take action and stay registered to vote. Pic: League of Women Voters (WI) Thus, traditional AI sub-disciplines such as planning and rule-based expert systems are outside our remit. Avenue-speci\ufb01c Analyses Within each of the above avenues, our analysis will focus on critically and qualitatively evaluating the context as well as extant or potential usage of AI technology. This will be followed by our analysis of potential risks, and discussions on risk-mitigating or alternative pathways. An overview of our analyses appears in Table 1. Voter List Maintenance The Context An up-to-date and error free voter list is critical to ensuring the integrity of elections (Merivaki 2020). Voter lists may be maintained at one of various levels of granularity viz., city/district, provincial or national levels. Further, not everybody may be eligible to vote in all elections. For example, in the Republic of Ireland, non-Irish citizens can vote in local elections but not in the presidential election, and EU citizens can vote in European Parliament elections but not national ones, or referenda8. Thus, the voter lists may be speci\ufb01c to the elections in question. Whether there are multiple or single voters lists, there are overarching considerations to ensure that they are up to date. Voters need to be removed from voter lists if they have moved, changed citizenship, or in the event of their death. Analogously, incoming voters to a locality would need to be added to the voter list. In contrast to removal, the onus of making an application for inclusion in the list is usually on the incoming voter, and the application is accepted as long as set out conditions are met and the voter is not already registered. Periodically, there may also be a need to ensure that voters are not registered multiple times on the same voter list, or across different states\u2019 voter lists. 
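To make the de-duplication need just described concrete, the sketch below flags candidate duplicate voter-list entries using standard-library string similarity. It is a toy under our own assumptions: the field names, thresholds and the blocking key are invented for illustration, and any flagged pair should be routed to human review rather than automatic removal.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_duplicates(records, name_thr=0.85, addr_thr=0.80):
    """Flag pairs of voter records that look like the same person.
    Records are dicts with illustrative fields: 'name', 'dob', 'address'.
    Blocking on date of birth keeps the pairwise comparison tractable."""
    flagged = []
    for r1, r2 in combinations(records, 2):
        if r1["dob"] != r2["dob"]:          # blocking key
            continue
        if (similarity(r1["name"], r2["name"]) >= name_thr and
                similarity(r1["address"], r2["address"]) >= addr_thr):
            flagged.append((r1, r2))        # for human review, not automatic purging
    return flagged

if __name__ == "__main__":
    voters = [
        {"name": "Maria Gonzalez", "dob": "1984-03-12", "address": "12 Elm Street, Springfield"},
        {"name": "Maria Gonzales", "dob": "1984-03-12", "address": "12 Elm St, Springfield"},
        {"name": "M. Okafor",      "dob": "1990-07-01", "address": "4 Oak Road, Springfield"},
    ]
    for a, b in candidate_duplicates(voters):
        print(a["name"], "<->", b["name"])   # Maria Gonzalez <-> Maria Gonzales
```

Even this simple sketch makes visible where the risks discussed next arise: the choice of thresholds and of which names "look similar" directly determines who gets flagged.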
Extant or Potential AI Usage The data entry corresponding to an ex-resident sticking around in a voter list could affect electoral integrity (especially, if a false vote is cast in their name), whereas a legitimate voter being left out of the list could reduce the con\ufb01dence in the electoral process. Carefully balancing access and integrity is at the core of the task of voter list maintenance (Merivaki 2020). There has been little evidence of AI usage in voter list maintenance within scholarly or grey literature. There has, however, been a mention of the use of automated systems (e.g., name matching) to purge voter rolls in the US (Kayser-Bril 2022), details of which would be available in a forthcoming article (Bender 2022). In terms of non-traditional approaches to voter list maintenance, there have been cases of usage of administrative data (e.g., driving license data, postal service data) in Wisconsin (Huber et 8https://www.gov.ie/en/service/a3e81-voting-in-ireland/ \fal. 2021) to identify potential stale entries within the voter list, following which a postcard is sent to them to con\ufb01rm that they are still resident in the address. Fig 1 illustrates the format of a card sent out by Wisconsin to inactive voters, shifting the onus on the voter to then con\ufb01rm that they are still resident at the address. We note that harvesting evidence from administrative sources and using patterns therein to determine potential staleness is a paradigm that can be signi\ufb01cantly accelerated by AI. This is so since developing heuristics based on data patterns and harnessing them for faster decision making (often at some cost to accuracy) has been at the core of AI (Batchelor and Waltz 2001). In other words, this paradigm opens a channel for potential and substantive AI entry into the voter list maintenance process. We now outline streams of extant AI techniques which could be employed within voter list maintenance in the near future, based on our informed judgment. First, there has been a signi\ufb01cant body of computing literature within the data management and analytics communities on the task of record linkage (Christen 2012). These enable identifying duplicates or matching related records within a data source (e.g., voter list database) or across data sources (e.g., across voter list and driving license databases) using pattern-based heuristics. Identifying two similar entries in a voter list or identifying an entry in a voter list that matches with a driving license record for a different state could both be interpreted as pointing to potentially stale entries which need to be scrutinized further or removed. Second, techniques from the AI sub-discipline of outlier detection (Wang, Bah, and Hammad 2019) may be applied to identify \u2018atypical\u2019 entries in voter lists, which may be \ufb02agged for manual scrutiny. Such usage is inherently problematic in that it is driven by the lack of appreciation of diversity among voters, but we suspect that the rampant usage of outlier detection in data-driven \u2018smart\u2019 policing (Yamin, Shalaginov, and Katt 2020) (where similar considerations have been left unattended) would inevitably spill over to voter list maintenance. Extant or Potential Rami\ufb01cations We consider potential or extant rami\ufb01cations of AI usage in voter list maintenance at two levels viz., general and speci\ufb01c. 
At the general level, we draw the reader\u2019s attention again to the fundamental trade-off in voter list maintenance, that of balancing access and integrity (Merivaki 2020). While a focus on access would involve identifying the left-out voter, a focus on integrity would involve discovering fraudulent or stale entries in voter lists. The onus on the access issue is often left to individual voters to register themselves in time (and potentially, funded state campaigns to create public awareness), and thus, technological interventions on voter list maintenance are likely to focus on the integrity aspect. Data-driven AI is focused on leveraging available data sources such as voter lists in our scenario, and this makes it easier to conceptualize usage of AI to identify the fraudulent voter as opposed to the left-out voter who is absent in the voter list as it stands. In other words, deepening usage of AI in voter list maintenance is thus likely to lead to con\ufb01gurations that sit more towards the integrity side of the integrity-access trade-off. This may also be read within the backdrop of observations that extant AI usage has a dominant carceral-positive \ufb02avor (Katz 2020). At the speci\ufb01c level, we consider three representative issues. First, (Huber et al. 2021) observe that the paradigm of administrative data usage to identify fraudulent or stale voter list entries \u2013 even through manual processes \u2013 produce errors that engender a \u2018racial burden\u2019 where errors are observed more for minority ethnicity. This implies that AI simply reproducing human decision making could itself be problematic, since it could produce the same kind of racial biases, but under an aura of technological legitimacy. Second, the usage of pattern-based heuristics to identify stale voter list entries would be inevitably faster and deeper with AI, as compared to manual processes. Such a data-led approach could lead to producing myriad kinds of biases as is often observed in data-driven AI systems (Ntoutsi et al. 2020). Third, there have been observations of signi\ufb01cant local differences in voter list issues (Merivaki 2020). Such observations implicitly call for bespoke voter list maintenance techniques for speci\ufb01c localities to ensure effective working. We observe that this call stands in signi\ufb01cant tension with the AI focus on generalizability, which has been observed as the second major driving value in machine learning research (Birhane et al. 2022). In other words, normalizing AI for voter list maintenance could result in signi\ufb01cant disadvantage for localities whose voter list maintenance issues are divergent from the predominant trends at the national or global level. Pathways Forward We discuss options that would sway AI adoption in voter list maintenance towards reasonable and low-risk directions. First, the access-integrity trade-off and the observations that current AI could be more applicable at the integrity end, points towards a research gap, that of access-focused AI. These would involve developing AI techniques to identify omissions in voter lists, perhaps using external sources such as administrative data (e.g., drivers licenses, postal service data). Such potential omissions could then be \ufb02agged to send targeted pamphlet-based requests to enroll in voter lists. These could help aid a more inclusive voter list to deepen the democratic process. 
Second, within the realm of AI usage towards identifying likely fraudulent or stale entries in voter lists, requiring usage of explainable AI that would generate reasonable explanations could be an important safeguard against biased operation. Obviously, what constitutes a reasonable explanation would necessarily need to be sourced from experts in voter list issues, and the technological challenge of developing AI that could adhere to such constraints would lead to new directions for AI research in the area. Third, each jurisdiction that maintains a voter list would need to analyze the appropriateness of offthe-shelf voter list maintenance AI for its own local context, given previously referenced observations on local differences in voter list issues (Merivaki 2020). This requires that AI techniques for voter list maintenance be transparent in their operation (this is complementary to producing explanations for each decision, as discussed above), so they \fcan be scrutinized for applicability for each jurisdiction, and adapted easily to suit the speci\ufb01c conditions. Polling Booth Location Determination The Context Polling booths, as locations where the voter exercises their right to choose, could be regarded as the most important location of an election. The importance of geographical locations have been long understood, not least due to concerns around gerrymandering, i.e., the practice of redrawing electoral district boundaries to the advantage of speci\ufb01c political parties (Stephanopoulos 2018). Polling booth location determination is a \ufb01ner-grained task, that of determining where individual voters within an electoral district should go to exercise their franchise. While most polling booth locations are fairly static over decades if not centuries, there has been evidence from political psychology that the nature of the polling booth (e.g., a church vs. a secular building) could in\ufb02uence the vote (Rutchick 2010); this has been referred to as the polling place priming effect (Blumenthal and Turnipseed 2011). Further, the distance to the polling booth has been argued to have a nuanced relationship with voter turnout (Garnett and Grogan 2021). All these suggest the importance of determining polling booths reasonably to ensure free and fair elections. Yet, it needs to be observed that identifying polling booth locations is often a simple decision-making task that needs to be undertaken very infrequently. For example, the legacy polling booths may be schools (or church halls, as common in the West) which continue usage as polling booths for generations. In the unusual event that a regularly used polling booth is discontinued, the proximal school (or church) may be a natural replacement. The main potential context of automating the polling booth determination process could be in locations that experience signi\ufb01cant demographic churn due to inward or outward migration. Extensive inward migration might necessitate identifying newer booths for capacity considerations, and outward migration might require discontinuation of some legacy booths. Further, periodically, some jurisdictions may re-determine/change polling booths en masse viz., rationalization of polling booths in India9, or consolidation in US which led to recent accusations of bias in Texas10 (screengrab in Fig 2) and Georgia11. Extant or Potential AI Usage Based on an extensive search, we found no evidence of AI usage in determining polling booth locations within extant literature. 
The only computing method we could locate in this space is a recent work on the drop box location problem (Schmidt and Albert 2022), which considers computationally determining locations to place ballot drop-boxes, a 9https://www.thehindu.com/news/cities/Hyderabad/rationalisationof-polling-stations-ceo-seeks-time/article24609851.ece 10https://www.theguardian.com/us-news/2020/mar/02/texaspolling-sites-closures-voting 11https://www.reuters.com/article/us-usa-election-georgiaidUSKCN1L51ZP Figure 2: Screengrab from a March 2020 article in The Guardian on bias in polling booth closures in Texas. recent addition to the voting infrastructure in the US, especially during the COVID era. The work proposes an integer program to address the task. The paucity of AI work on the polling booth location problem may be regarded as unsurprising since AI (or any kind of automation) is often prioritized in cases involving repetitive decision making (which is not quite the case for the polling booth location problem), to allow that the upfront investment may be justi\ufb01ed over a reasonable period of time. However, the technological readiness of AI for usage in polling booth determination may be regarded as moderate to high, owing to signi\ufb01cant advances in related tasks. First, the classical task of facility location (Celik Turkoglu and Erol Genevois 2020) may be observed to be highly allied with the task speci\ufb01cation of polling booth determination. Facility location involves determining the locations for a (usually \ufb01xed) number of service centers (e.g., one of several types such as \ufb01re stations, post of\ufb01ces, cafes) in accordance with multiple (and potentially con\ufb02icting) criteria. For example, in the case of \ufb01re stations, the criteria to optimize for may include property value, population coverage and area coverage (Farahani, SteadieSei\ufb01, and Asgari 2010). A cafe chain may want to open a \ufb01xed number of new outlets to ensure accessibility to the maximum number of potential customers. When considering polling booths as facilities, area and population coverage are obviously pertinent criteria, in addition to other criteria such as public transport access. This speci\ufb01cation of facility location naturally yields to multi-criteria optimization. Second, another classical task, that of clustering (Xu and Tian 2015), considers grouping objects into clusters/groups such that objects that are similar to one another be assigned to the same cluster with a high likelihood. The usage of clustering for facility \fFigure 3: An illustration of clustering for facility location. For usage in polling booth determination, the locations could be locations of voters. Figure from (Lin et al. 2021) location is most appropriate when the predominant assignment criterion is that of location-proximity, when similarity may be judged as inversely related to spatial distance. For the case of polling booths, voters may be clustered into geoproximal clusters, beyond which each cluster (i.e., group of voters) may be allocated a polling booth; see Fig 3 for an illustration. Clustering algorithms have a long tradition that dates back to the 1960s (Jain 2010). The body of clustering literature is fairly versatile in that it includes algorithms that allow customized speci\ufb01cations for cluster shapes (e.g., spheres in k-means (MacQueen 1967)), making them customizable in ways that facility location algorithms may not yield to. 
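As a minimal illustration of the clustering route just described, the sketch below groups synthetic voter coordinates with k-means and treats the cluster centres as candidate booth locations. It assumes planar coordinates and the availability of scikit-learn; a real exercise would use road-network travel times, booth capacity limits, and would snap centres to feasible sites (schools, halls) under human oversight. The mean and maximum voter distances printed at the end anticipate the objective-function discussion that follows.

```python
# Minimal sketch: geo-proximal clustering of synthetic voter coordinates to suggest
# candidate polling booth locations. Coordinates are treated as planar for simplicity;
# a real pipeline would use road-network travel times, capacity constraints, and would
# snap cluster centres to feasible sites rather than arbitrary points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# three synthetic "neighbourhoods" of voters
voters = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.3, size=(400, 2)),
    rng.normal(loc=(3.0, 1.0), scale=0.4, size=(300, 2)),
    rng.normal(loc=(1.0, 4.0), scale=0.5, size=(300, 2)),
])

n_booths = 3
km = KMeans(n_clusters=n_booths, n_init=10, random_state=0).fit(voters)

centres = km.cluster_centers_          # suggested booth locations
assignment = km.labels_                # which booth each voter is sent to
dists = np.linalg.norm(voters - centres[assignment], axis=1)

print("suggested booth locations:\n", centres.round(2))
print("mean voter distance:", dists.mean().round(3))
print("max voter distance :", dists.max().round(3))
print("voters per booth   :", np.bincount(assignment))
```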
Vanilla clustering formulations seek to minimize the sum of distances between each object and its cluster representative; for the polling booth task, this would correspond to minimizing the cumulative distance that voters would need to travel to reach their polling booth. As may be obvious, minimizing the cumulative distance could still leave some voters with a signi\ufb01cant distance to travel, to reach their polling booth. Recent advances in fair clustering (P and Abraham 2020; Jung, Kannan, and Lutz 2020; Abbasi, Bhaskara, and Venkatasubramanian 2021; Stepanov 2022) consider ensuring that no objects (voters) are left too disadvantaged in terms of distance to their cluster representative (polling booth). This and other advances in fair clustering (Mahabadi and Vakilian 2020; Kleindessner, Awasthi, and Morgenstern 2020) have extended the clustering literature to align more with considerations that are applicable for polling booth location determination. Extant or Potential Rami\ufb01cations We outline three potential risks based on our assessment of AI usage as exempli\ufb01ed above. First, facility location algorithms have traditionally considered scenarios within which at least one criteria is aligned with pro\ufb01tability or revenue. In fact, (Celik Turkoglu and Erol Genevois 2020) explicitly suggest that the overarching trade-off in facility location is one between pro\ufb01tability and sustainability (of the business in question). The proposed technique for the drop box location problem (Schmidt and Albert 2022) also notably considers cost as a dominant criterion to optimize for. As an example, pro\ufb01tability can be improved by locating a facility (e.g., a cafe) in a location that is likely to attract more footfall (e.g., preferring a high-street location) relative to competition. However, polling booths do not operate in a competitive market since each voter is uniquely assigned to a polling booth, and the electoral body has a monopoly over the polling process. Thus, signi\ufb01cant care must be used to ensure that business ethos does not creep into polling booth determination, and that off-the-shelf technology is customized enough to suit the task. Second, the availability of polling booth location determination technology may create an implicit urge to use it reasonably frequently, to extract value from the upfront cost of technology development or procurement. This may cause unfamiliar volatility in polling booth determination, with people seeing their assigned polling places change regularly. Such volatility in polling booth locations could increase the \ufb01nding costs (Brady and McNulty 2011) of polling places, and negatively affect voter turnout. Third, probably the most signi\ufb01cant concern is that of technology enabling \ufb01ne-grained gerrymandering, where polling locations (not electoral boundaries as in traditional gerrymandering) are manipulated for partisan objectives (we note here that there has been no evidence suggesting partisan manipulations of polling location yet (Shepherd et al. 2021)). Yet, we draw attention to recent research (Fitzsimmons and Lev 2020) proposing techniques for geographic manipulation of polling locations for partisan objectives. As an illustrative example, one of the tasks the paper considers is whether a set of polling locations can be chosen to ensure the victory of a speci\ufb01ed candidate, under certain (arguably strong) assumptions. 
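Returning to the objectives discussed above, a useful habit is to audit any proposed configuration rather than trust the optimiser's own score: the sketch below reports the overall mean and maximum travel distance and, for hypothetical demographic groups, the per-group mean. A low cumulative distance can coexist with a high worst case, or with one group systematically travelling further, which is exactly the kind of pattern that fairness-aware formulations (and scrutiny for manipulated configurations) are concerned with. Group labels, coordinates and booth positions are all synthetic.

```python
# Sketch of a configuration audit: given proposed booth locations and an assignment of
# voters to booths, report the travel burden overall and per (hypothetical) demographic
# group. Distances are straight-line for simplicity; everything here is synthetic.
import numpy as np

def audit(voter_xy, voter_group, booth_xy, assignment):
    d = np.linalg.norm(voter_xy - booth_xy[assignment], axis=1)
    report = {"mean_distance": float(d.mean()), "max_distance": float(d.max())}
    for g in np.unique(voter_group):
        report[f"mean_distance[{g}]"] = float(d[voter_group == g].mean())
    return report

rng = np.random.default_rng(1)
voter_xy = rng.uniform(0, 10, size=(1000, 2))
voter_group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
booth_xy = np.array([[2.0, 2.0], [8.0, 8.0], [5.0, 5.0]])

# assign each voter to their nearest booth
assignment = np.argmin(
    np.linalg.norm(voter_xy[:, None, :] - booth_xy[None, :, :], axis=2), axis=1
)
print(audit(voter_xy, voter_group, booth_xy, assignment))
```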
Pathways Forward We now consider risk-mitigating and progressive ways of using technology for polling booth determination to align with the best interests of free and fair elections. First, identifying a set of polling booth locations is a collective optimization task where several criteria are simultaneously optimized for. Given the result of the optimization, i.e., a set of suggested polling booth locations, a human expert (e.g., an election of\ufb01cial) who considers one of the locations inappropriate would not be justi\ufb01ed in altering it slightly, as we will explain. The judgment of inappropriateness of a particular polling booth location may stem from any of a variety of extrinsic reasons (e.g., proximity to a garbage dump, unavailability of adequate toilet facility etc.) that cannot be mathematically abstracted and fed into the optimization. The process of altering a speci\ufb01c polling booth location slightly is inappropriate since even a slight change in one polling booth may necessitate changes in other booth locations to ensure that the result stays truthful to the collective optimization at play. For the reader familiar with the task of centroid clustering (e.g., k-means (MacQueen 1967)), a simpler collective optimization problem, the above scenario may be seen as analogous to the case where altering one cluster center could cause knock-on changes in other cluster centers and cluster allocations of even far \ufb02ung objects. This inability to legitimately modify the AI\u2019s result even slightly \fcould be seen as putting a human expert in a metaphorical straitjacket. Towards addressing this, AI techniques that produce a plethora of viable options may be developed, to allow the human to exercise due diligence and choose one from the viable options. Second, the advent of AI tools to enable technological manipulations of polling places (Fitzsimmons and Lev 2020) calls for techniques to safeguard against such manipulations. Such safeguards can themselves be operationalized as auditing oriented AI which will enable discovery of partisan manipulations. There are, however, risks that such auditing AI could be used within adversarial learning frameworks (Gui et al. 2021) to enable the development of more re\ufb01ned AI-based malicious manipulations. Third, any form of polling booth location determination, whether AI-based or human-driven, would lead to a subset of disadvantaged voters. These could include voters who experience an awkward combination of physical disability and moderate to high distance to their assigned polling location. A novel stream of AI that identi\ufb01es voters who are disadvantaged could aid efforts (e.g., prioritizing transportation help, or providing remote voting options) to foster more inclusive voting and offset the de\ufb01cits of the chosen polling booth con\ufb01gurations. Polling Booth Protection The Context Among the phenomena that threaten voters\u2019 exercise of voting franchise is voter intimidation, increasingly recognized as a global phenomenon (e.g., Guatemala (GonzalezOcantos et al. 2020), Russia (Frye, Reuter, and Szakonyi 2019)). Some jurisdictions have strong legal protections (Woodruff II 2011), but voter intimidation nevertheless remains existent. Other forms of infringement of voting rights include booth capturing (Sharma 1999) through violent means. 
These concerns have been used to justify preventive detention of potential criminals (Verma 2005) and designating speci\ufb01c polling places as problem booths to provide additional police protection12. Such categorization and density gradients in polling booth police presence and patrolling could in\ufb02uence electoral calculus. In this section, we consider prioritization of polling booth protection as an avenue of potential AI usage. Extant or Potential AI Usage From our literature surveys, we \ufb01nd that concerns of intimidation and violence at polling places are highlighted largely within the Global South (Besaw 2021), where AI penetration in the public sector has been traditionally low. This provides a backdrop into the non-existence (as far as we could assess) of bespoke AI techniques for polling booth protection prioritization. There has been emerging theoretical AI work on prioritizing protection of \u2018voter groups\u2019 to ensure result stability in elections (Dey et al. 2019; 2021), under conditions that we regard as quite synthetic; these may need much adaptation to be used within the task 12https://www.thehindu.com/news/cities/Kochi/276-problembooths-in-district/article33273456.ece Figure 4: Screenshot from PredPol illustrating crime-prone hot spots. Pic from https://www.neoteo.com/predpol-unaplataforma-para-predecir-el-crimen/ of prioritizing polling booths to protect. Our choice of identifying problematic polling booths as an avenue for AI usage is strongly motivated by observations of burgeoning AI development in a task that is very similar in spirit, that of hot spot policing that seeks to identify crime-prone locations, which we describe below. We brie\ufb02y outline the historical context and evolution of hot spot policing and illustrate how it resonates with the task of identifying problematic polling booths. The shift of focus in policing from people (i.e., criminals) to places has roots in the deployment of heuristics such as broken windows (Kelling and Wilson 1982) to identify crime-prone places. This was later mainstreamed \u2013 most prominently under the mayorality of Rudy Giuliani in New York, whose regime was praised for effective place-based policing (Langan and Durose 2004) amidst scathing critiques focused on racial bias (Noel 2007). This was followed by strong scholarly arguments in favor of increased normalization of hot spot policing (Sherman and Weisburd 1995). Today, hot spot policing is embedded strongly within the umbrella of predictive policing pervasively within US and Europe through software such as Predpol (Geolitica13) and Palantir Gotham14. An illustrative screenshot from PredPol has been included in Fig 4. We observe that predictive policing is built strongly on the principles of pre-crime (McCulloch and Wilson 2015) (the idea that the occurrence of a crime can be anticipated before it happens) and the primacy of place (as opposed to criminals, victims and temporality) in crime. These, we note, are precisely the premises upon which the idea of predicting problem polling booths are founded upon. Predictive policing techniques leverage historical patterns of crime and indicative cues, and this broad working philosophy makes their building blocks likely to be adapted by corporate software vendors to expand into the electoral AI market. 
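To be clear about what could transfer, the sketch below is a deliberately bare-bones, hypothetical rendering of a hot-spot-style heuristic applied to booths: each booth is scored by a recency-weighted count of past incident reports within a radius, and the top-scoring booths are flagged. No vendor's actual method is implied, and all data and parameters are synthetic. Note that if the historical reports reflect uneven policing or uneven reporting, the ranking simply inherits that bias.

```python
# Bare-bones illustration (not any vendor's actual method) of a hot-spot-policing-style
# heuristic repurposed for "problem booth" prediction: score each booth by a
# recency-weighted count of past incident reports within a radius, then flag the
# top-scoring booths. Biased incident data yields a correspondingly biased ranking.
import numpy as np

def booth_scores(booth_xy, incident_xy, incident_age_years, radius=1.0, half_life=2.0):
    weights = 0.5 ** (incident_age_years / half_life)          # older incidents count less
    d = np.linalg.norm(booth_xy[:, None, :] - incident_xy[None, :, :], axis=2)
    return (weights * (d <= radius)).sum(axis=1)                # weighted count per booth

rng = np.random.default_rng(7)
booths = rng.uniform(0, 10, size=(20, 2))
incidents = rng.uniform(0, 10, size=(60, 2))
ages = rng.uniform(0, 6, size=60)

scores = booth_scores(booths, incidents, ages)
flagged = np.argsort(scores)[::-1][:3]                           # top 3 "problem booths"
print("flagged booth indices:", flagged, "scores:", scores[flagged].round(2))
```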
Problematic polling booth prediction is a task that has its own nuances, and thus, bespoke methods for the same may leverage advances in related sub-disciplines of data-driven 13https://geolitica.com/ 14https://www.palantir.com/platforms/gotham/ \fAI. For example, a pre-compiled set of tell-tale patterns that are indicative of impending polling place violence may be leveraged for automated discovery in time series streams, a task which may be cast within the framework of time series motif discovery (Mueen et al. 2009). Extant or Potential Rami\ufb01cations Given the absence of extant usage of AI in problematic polling booth prediction, we focus on potential rami\ufb01cations in using AI for the task in the future. Density gradients in policing of polling booths could have direct and indirect rami\ufb01cations. As an example of a direct risk, observe that heavily policed polling booths could repel voters, reducing voter turnout; it was observed that police raids reduced turnout in Spain in 2017 (Rodon and Guinjoan 2022). Thus, it may be in the interests of the incumbent party to incentivize AI that designates areas where they expect lesser support as problematic, as a way to indirectly and silently sway electoral results in their favor. Besides this speci\ufb01c effect, there have been broader observations that policing of elections would in\ufb02uence electoral outcomes (Verma 2005). Based on our conjecture that likely usage of AI for the task would re\ufb02ect the patterns of usage of predictive and hot spot policing, the current understandings of risks within that realm provides a vantage point to ponder about the indirect risks for predicting problematic polling booths. It is also notable that the latter task, much like hot spot policing, is within the public sector, making learnings from hot spot policing likely pertinent for our task. Hot spot policing in particular, and new policing in general15, has been argued to aggravate systemic racism within the American context (Braga, Brunson, and Drakulich 2019) through subjecting minority neighborhoods to higher levels of policing. Other observations link an aggravation of police brutality to hot spot policing; for example, an article in the Hill16 says: \u2018The epidemic of police brutality \u2013 primarily affecting black males \u2013 can be linked to the history of a technique called hot spot policing, ...\u2019. This has led to responses with skepticism even from the most vocal pioneers of the technology (Weisburd 2016). Applied AI has often been argued to have a performative (cf. descriptive) aspect, where they reshape the very phenomenon they are supposedly modelling (McQuillan 2022). The observation of runaway feedback loops in predictive policing (Ensign et al. 2018) foregrounds this performative aspect of policing AI which, we surmise, could spill over in myriad ways to the use of similar technology in predicting problematic polling places. Pathways Forward We \ufb01nd it hard to envisage ways in which AI could be used in a reasonable manner to predict problematic polling places. Firstly, we observe that the nuanced nature of booth capturing makes it hard to predict using mathematical abstractions and statistical models. 
Secondly, given that AI cannot be expected to be foolproof, any errors made by AI in choosing 15a phrase used to refer to technology-driven policing 16https://thehill.com/blogs/congress-blog/civil-rights/265795police-brutality-is-not-invisible/ problem polling booths would arguably have a needless indirect in\ufb02uence on electoral outcome. Yet, in cases where AI is used to prioritize police force deployment for polling booth protection, there could be ways to use it in transparent and principled ways to mitigate loss of con\ufb01dence and trust among the public in the prioritization process. Since transparency and explainability have been discussed previously, we do not delve into those details again. We note that the task of spatial hot spot identi\ufb01cation has been explored, in parallel to data mining methods, within the realm of statistical theory; a survey of related techniques appears in (P 2016). In particular, the spatial scan statistic (Kulldorff 1997) has spawned a statistically principled family of methods. These, while being principled in deploying the usage of rigorous and bespoke statistical signi\ufb01cance tests, suffer from poor scalability and ef\ufb01ciency, explaining their limited uptake within hot spot policing techniques. The statistical rigor in those methods would limit the extent of errors in predicting problematic booths. Further, there has been emerging literature on demographic fairness in hot spot detection (P and Sundaram 2022), with applications to hot spot policing. The overall idea is to ensure that the collective population across areas judged as problematic are demographically diverse and representative of the broader population. Attempts at blending transparency, statistical rigor and fairness principles in developing techniques for problematic polling place identi\ufb01cation could mitigate some of the risks discussed earlier. Voter Authentication The Context Ensuring that the vote is cast by legitimate voters is obviously of paramount importance to the integrity of the electoral process. Most electoral processes globally require some form of authentication at the polling place, such as a requirement that voters bring an identi\ufb01cation document while visiting the polling place. While most focused studies have found very little evidence of impersonation (James and Clark 2020; Ahlquist, Mayer, and Jackman 2014), accusations of voter fraud are often used by political parties to discredit the electoral process and outcomes, to good media attention (Fogarty et al. 2015). Such accusations exert pressure on democratic institutions to reinforce public trust in electoral processes through continuous attention to voter authentication. Within the context of online voting (e.g., Estonia), the usage of a digital id for voter authentication is protected by strong and robust encryption technologies17. Voter authentication within postal voting is multi-modal and implicit since the voter needs to make a request following which the voting package is sent to the registered address18. Our focus, in this section, is on the use of AI-based voter authentication methodologies at the physical location of the polling place. 17https://e-estonia.com/solutions/e-identity/id-card/ 18https://en.wikipedia.org/wiki/Postal_voting_in_the_United_States \fFigure 5: Usage of Face Recognition for 2020 Elections in Telangana (India). 
Pic from https://www.theweek.in/news/india/2020/01/23/India\ufb01rst-poll-using-face-recognition-app-conducted-peacefullyin-Telangana.html Extant or Potential AI Usage The use of AI for voter authentication, based on our literature search, is found to be very scarce. One of the notable usages has been within the context of a face recognition pilot in the 2020 Telangana municipal elections in India (Allie 2023); a picture from a media report on the same appears in Fig 5. There have been academic prototypes focusing on video veri\ufb01cation of voter identity (Dunbar et al. 2015). However, this sparse uptake sits in very sharp contrast with AI literature which boasts of a plethora of person identity veri\ufb01cation and fraud detection mechanisms using techniques such as face recognition (Li et al. 2020), \ufb01ngerprint recognition, iris recognition (Mostofa et al. 2021), retina scans, and counterfeit document detection (Berenguel Centeno et al. 2019; Berenguel Centeno 2019), among others. Given observations of improvements in document forging and an increasing realization that the promise of tamperproof of\ufb01cial documents remains a holy grail (Baechler 2020), it may be reasonable to expect a shift of focus from voter authentication based on identity documents to biometric based authentication. While it is clear that AI-based voter authentication is likely to see an enhanced interest in the times to come, a key aspect is whether there is an emerging data infrastructure that can enable the deployment of these technologies at the population scale. Most nations would have photos collected as part of voter enrolment, and thus have (potentially stale) national-level citizen photograph databases. These may not be suf\ufb01cient, on their own, to implement and deploy facial recognition; it is in this context that deeper forms of national-level data collection gain importance. Considering the case of India, the largest democracy by population, there has been an emergence of biometricsbased database called Aadhar used as a gateway to access several services (Rao and Nair 2019). Aadhar records \ufb01ngerprints, iris scans and photographs of each citizen, and boasts of a coverage of 95% of the population19. Aadhar is routinely used to access public services through \ufb01ngerprint scanners. While Aadhar is separate from the voter\u2019s identity card, there has been a recent initiative to link voter cards with Aadhar20 which increases the readiness to use biometric authentication in elections. There has been much enthusiasm in academic circles to design Aadhar based biometric and digital (e.g., one-time-password) based authentication mechanisms in elections (Roopak and Sumathi 2020; Jain et al. 2023). While Aadhar is signi\ufb01cant due to the scale, other biometric-based population databases have emerged in recent times, of which Indonesia\u2019s e-ktp (Kevindra Darwis and Lim 2011) system is an example. These indicate that AI usage for voter authentication may be expected to be piloted widely in the near future. It also needs to be mentioned that there is signi\ufb01cant resistance to national ID cards in the West, and protests have caused large-scale biometric based ID projects to be shelved to postponed (details in (Ramakumar 2010)). Extant or Potential Rami\ufb01cations We now consider the myriad risks of using AI-based authentication mechanisms for voter identi\ufb01cation. 
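To ground the discussion of risks that follows, the sketch below shows the bare matching logic that face-based authentication pilots typically reduce to: compare an embedding of the live capture against the enrolled embedding and accept if the similarity crosses a threshold. The embed() function here is a placeholder rather than any real model, and the threshold is arbitrary; the risks discussed next largely manifest as group-dependent error rates around exactly such a threshold.

```python
# Skeleton of threshold-based face verification. embed() is a placeholder, not a real
# face-embedding model, so the printed outcome below is not meaningful; only the
# control flow is illustrated. A single accept/reject threshold governs both false
# rejections (legitimate voters turned away) and false accepts, and error rates around
# that threshold can differ sharply across demographic groups.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model; returns a unit-length vector."""
    v = np.resize(image.astype(float).ravel(), 128)
    return v / (np.linalg.norm(v) + 1e-9)

def verify(live_image, enrolled_embedding, threshold=0.6):
    """Accept the voter if cosine similarity exceeds the threshold."""
    similarity = float(np.dot(embed(live_image), enrolled_embedding))
    return similarity >= threshold, similarity

# Hypothetical usage: 'enrolled' comes from the registration photo, 'live' from the booth camera.
enrolled = embed(np.random.default_rng(0).integers(0, 255, size=(64, 64)))
live = np.random.default_rng(1).integers(0, 255, size=(64, 64))
accepted, score = verify(live, enrolled)
print(accepted, round(score, 3))
```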
First, the issue of race and gender bias of facial analysis software was highlighted to much media attention in 2018 (Buolamwini and Gebru 2018). It was shown that intersectional groups (e.g., black women) suffer very high rates of error (up to 34%), whereas light-skinned males often record accuracies in excess of 99%. To put this in perspective, observe that this could lead to signi\ufb01cant numbers of women, minorities and intersectional groups wrongfully denied entry through the AI-based authentication mechanism (to fall back to manual authentication processes), inconveniencing and alienating them from the electoral process. Second, beyond facial analysis, biometric recognition systems such as \ufb01ngerprint, iris and retina-based ones have seen very limited experimental analyses on possible bias in behavior. Fingerprint systems, the more popular among the above, have been observed to exhibit some amounts of demographic bias (Marasco 2019). Fingerprint based authentication in Aadhar has been reported to have high prevelance of authentication errors among manual labourers in a small-scale survey (Bhatti 2012); large-scale studies need to be done to con\ufb01rm whether these are chance occurrences or systematic bias against labourers whose \ufb01ngerprints may have faded beyond the margin of error (as reported in the case of beedi workers21). Third, there is a risk that usage of enhanced technology could reduce voter turnout, especially among marginalized communities. The face recognition pilot in Telangana (India) in 2020 recorded statistically signi\ufb01cant reductions in voter turnout, amounting to more than 6 percentage points. The study analyzing this (Allie 2023) 19https://government.economictimes.indiatimes.com/news/digitalindia/95-of-people-have-aadhaar-and-use-it-once-a-month-onaverage-report/72236213 20https://en.wikipedia.org/wiki/Aadhaar#Drive_to_link_Aadhar_with_Voter_ID 21https://pulitzercenter.org/stories/fading-\ufb01ngerprints-beediworkers-india \foutlines three potential explanatory reasons; logistical issues, shifts in fraudulent activity and increased alienation of marginalized religious communities (in this case, Muslims). The last concern is of signi\ufb01cant importance and points to the potential embedded within such technologies to enhance the power of the state apparatus to engineer electoral calculus. Fourth, there are broader concerns with the technological solutionism (the belief that every problem can be solved by technocratic means) embedded within the urge to use AI-based biometric authentication, as outlined in (Gelb and Diofasi 2019). The article says that biometric elections need not necessarily enhance credibility and fairness of elections, and in some cases could even undermine it. Further, the enhanced resourcing diverted to maintaining up-to-date biometric authentication for election usage could be used to deepen surveillance towards undermining individual freedoms and political rights (Hosein and Nyst 2013). Pathways Forward Much like the case of polling booth protection, we \ufb01nd it hard to envision a reasonable usage of AI for voter authentication, especially in the case of biometric-based authentication. Yet, we deliberate on the possible pathways to mitigate risks in the usage of AI-based biometric authentication of voters. First, we observe that non-technological administrative safeguards ought to be instituted to ensure that biometric authentication does not alienate (sections of) voters, through instituting other parallel options. 
As recommended in (Bhatti 2012), biometrics should not crowd out other authentication options and become a de facto compulsory alternative. Second, extensive auditing of biometric identi\ufb01cation using carefully curated benchmarking (e.g., FRVT (Grother, Ngan, and Hanaoka 2019)) could help reveal biases to help direct policy and technological effort towards addressing them to mitigate biased operation. However, these have been warned as creating a false sense of fair AI, and concealing deeper concerns (Raji et al. 2020), something to be watchful of. Third, the unknown nature of the biases embedded within AI-based biometric authentication (since they have not been widely deployed at population-level) is a risk that deserves signi\ufb01cant attention, towards developing inclusive authentication mechanisms. We note that many biases stem from the attention to the typical user in technology design, an ethos that stems from the commercial development of AI. AI development often makes use of personas (a popular product design tool (Pruitt and Adlin 2006)), involving characterizations of typical users. The usage of personas ensures that the technology works well for the typical users, an important criteria for commercial products. However, within an election-focused authentication AI, the core interest is in ensuring that nobody is left out, which could be operationalized by focusing on identifying failure-prone \u2018edge\u2019 cases, and making sure that the technology works for them too. Such edge-case thinking (Wachter-Boettcher 2017, Ch. 3) could help foster higher levels of inclusivity and fairness in AI-based authentication. Video Monitoring of Electoral Fraud The Context Monitoring elections for proper conduct is seen as an important measure to ensure free and fair elections. In particular, as noted in (Hyde and Marinov 2014), monitoring ensures that there are ways to verify or vet post-election allegations of electoral fraud. The most common monitoring is that by independent international observers, but there could be bodies that monitor domestic elections, such as the Free and Fair Election Network (FAFEN22) in Pakistan. Independent of the presence of electoral observers, electoral bodies may try to further electoral transparency through modern monitoring technologies such as CCTV/surveillance cameras, which have been seeing increasing global role in surveillance systems. CCTV systems have been argued to be more effective than monitoring by human observers while also serving to deter electoral fraud (Obeta et al. 2021). In contrast to electoral monitoring by observers, CCTV based monitoring, given its inherent data-oriented nature, enhances the role that AI can play in election monitoring, which is what makes this a topic of interest for this paper. The emergence of scalable AI that can identify events in real-time have been increasingly used in other contexts such as institutional security (Kakadiya et al. 2019), home alert systems (Liang et al. 2021), and health-motivated monitoring of elder citizens (Huang et al. 2018); this makes the video-based election monitoring context a fertile avenue for AI usage. Extant or Potential AI Usage While there has been signi\ufb01cant deployment of CCTV based infrastructure in polling places globally, e.g., Russia (Asmolov 2014) and India23, there have been largely used to illustrate transparency and deter electoral fraud, or for citizen monitoring of elections. 
There is very limited public information available about how AI has been used over CCTV data streams, either in real-time or for post-election analyses. Among evidence of extant AI usage, an invited talk at an AI for elections workshop in 2021 (Gupta 2021) mentions usage of AI and video analytics to verify number of votes counted by analyzing data from CCTVs within Bihar state elections in India. A news article (Ahaskar 2021) throws more light into the usage of AI within the Bihar polling process. It suggests that video captures of the Electronic Voting Machine is performed and analyzed using OCR technology to do a technology-based parallel counting of votes much before the actual post-election vote counting process. The article also suggests real-time usage of AI-based analytics to identify any discrepancy on vote counts to alert polling of\ufb01cials. The proprietary technology, codenamed JARVIS24, is seen to be technology that is used for a variety of applications, and not a bespoke technology stack targeted at polling analytics. This is illustrative of how extant AI-based video analytics technology could transfer over to usage in 22https://fafen.org/ 23https://theprint.in/politics/uttarakhand-cctv-monitoring-atpolling-booths-security-increased-for-polling-day/830178/ 24https://www.staqu.com/#what_jarvis_is \fFigure 6: Page from VMukti, describing an election focused CCTV monitoring offering. Pic from https://www.vmukti.com/solution/election-surveillance/ polling booths, which forms our focus on the discussion of potential AI usage. It is also notable that there is an emerging prevalence of election-targeted video analytics solutions e.g., VMukti25 as illustrated in Fig 6. AI-based video analysis technology is most mature for the context of public safety, where it is largely developed for usage over data streams from surveillance cameras in public places. In 2016, it was reported that the average UK resident is captured across 70 cameras daily26, indicating the pervasive usage of video monitoring in contemporary societies. While we cannot summarize the extensive body of work on surveillance video analytics for public safety within this section, we provide some representative examples prioritized on their potential usage within the context of video-based electoral monitoring. One of the prime usages in public safety is towards identifying interesting events, along with their what, when and where of each event (Zhang et al. 2019). Certain types of events, such as violence (Ramzan et al. 2019), may be directly useful for usage over video streams from polling booths. More general AI for anomaly detection over videos (Sultani, Chen, and Shah 2018) could also be leveraged as-is in polling booths to \ufb01lter video streams for downstream manual analysis. It is conceivable that supervised ML models developed for public safety could deliver much higher accuracy for the polling booth setting, given the controlled and predictable environment of the polling booth. Of late, there has been a significant surge of interest in person re-identi\ufb01cation (Ye et al. 2022), the task of tracking the same person across multiple video streams (e.g., tracking a criminal\u2019s movements around the city captured across a plurality of surveillance cameras), as illustrated in Fig 7. 
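At its core, the matching step in re-identification is nearest-neighbour search over appearance embeddings extracted from different camera feeds, as the hedged sketch below illustrates. appearance_embedding() is a placeholder rather than any real model; detection, tracking and robustness to clothing or pose changes are the genuinely hard parts and are omitted here.

```python
# Sketch of the matching step in person re-identification across two camera feeds.
# appearance_embedding() is a placeholder, so the similarities below are not
# meaningful; the point is the matching structure, not the scores.
import numpy as np

def appearance_embedding(crop: np.ndarray) -> np.ndarray:
    """Placeholder: map a person crop to a unit-length appearance vector."""
    v = np.resize(crop.astype(float).ravel(), 256)
    return v / (np.linalg.norm(v) + 1e-9)

def cross_camera_matches(crops_cam_a, crops_cam_b, threshold=0.7):
    """Return (i, j, similarity) pairs whose appearance similarity exceeds the threshold."""
    emb_a = np.stack([appearance_embedding(c) for c in crops_cam_a])
    emb_b = np.stack([appearance_embedding(c) for c in crops_cam_b])
    sims = emb_a @ emb_b.T
    return [(i, j, float(sims[i, j]))
            for i in range(sims.shape[0])
            for j in range(sims.shape[1])
            if sims[i, j] >= threshold]

rng = np.random.default_rng(3)
cam_a = [rng.integers(0, 255, size=(128, 64)) for _ in range(4)]   # crops from booth 1
cam_b = [rng.integers(0, 255, size=(128, 64)) for _ in range(5)]   # crops from booth 2
print(cross_camera_matches(cam_a, cam_b))
```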
Person re-identi\ufb01cation is well-suited in developing a technological solution against multiple voting27, a well-understood electoral fraud where the same person casts votes multiple times, violating the equal representation principle28. 25https://www.vmukti.com/solution/election-surveillance/ 26https://www.cctv-surveillance.co.uk/news/uk-person-cctvcameras-70-times-day-research/ 27https://www.aec.gov.au/About_AEC/Publications/backgrounders/fraudand-multiple-voting.htm 28https://en.wikipedia.org/wiki/One_man,_one_vote Figure 7: Illustration of Person Re-identi\ufb01cation across two cameras. Pic from Rapid-rich Object Search Lab, NTU (SG) Extant or Potential Rami\ufb01cations The rami\ufb01cations of usage of AI-based video monitoring of elections are numerous. First, the usage of technologies like JARVIS (referenced earlier) that integrate OCR with video processing to enable \ufb01ne-grained electoral monitoring and even parallel candidate-speci\ufb01c vote counting pose serious questions relating to electoral integrity. Given that the voter and thus their identity is also captured, this could be seen as a pathway towards an explicit and apparent violation of secret suffrage29 (aka secret ballot or secret voting), a core principle of voting systems enshrined within the UN Declaration of Human Rights (Article 21.330). The usage of proprietary software for monitoring and analytics risk leaking sensitive electoral information to private parties31 outside the remit of the electoral bodies who bear sole responsibility for the conduct of the elections, which is another risk to electoral integrity. Without going into details and consequences, we note here that real-time candidate-speci\ufb01c vote counting could pose very grave consequences to electoral integrity by creating privileged information that could violate expectations of transparency in vote counting32. Second, usage of video surveillance technology, as observed in the case of face recognition (Allie 2023), has the potential to alienate marginalized communities. An emerging trend of \u2018protest surveillance\u201933 employs these very same technologies (e.g., video surveillance and analyses); against this backdrop, marginalized communities that engage in protests may \ufb01nd electoral surveillance much more alienating. There is also a broader context of studies on how 29https://en.wikipedia.org/wiki/Secret_ballot 30https://www.humanrights.com/course/lesson/articles-1925/read-article-21.html 31Even if the analytics is hosted within state-owned infrastructure, some data-based feedback, at least in the form of errors detected, would need to be passed on to the private vendors to build the next generation of the solution. 32https://followmyvote.com/importance-of-transparency-invoting/ 33https://www.cnbc.com/2020/06/18/heres-how-police-usepowerful-surveillance-tech-to-track-protestors.html \fcontemporary surveillance technologies are informed by the long history of racial formation and policing of black lives in the US (Browne 2015), which highlights that there could be nuanced implications of video monitoring on marginalized communities. Third, as a minor point, we observe that mainstreaming of video-based surveillance may undermine other forms of more procedurally rigorous election monitoring methods such as in-person visits by independent observer panels. 
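Before turning to pathways, it is worth noting how little machinery some monitoring functions require. The sketch below flags bursts of unusual activity using nothing more than frame differencing, which works even on heavily blurred or low-resolution streams since it never needs to identify individuals; this is the kind of 'shallow' signal that the pathways discussed next try to retain while discarding identifying detail. Frames are simulated as arrays and the threshold is arbitrary.

```python
# Sketch: activity flagging by simple frame differencing. This works on heavily blurred
# or low-resolution streams, i.e., on a "shallow" monitoring signal, since it never
# identifies individuals. Frames are simulated arrays; the threshold is illustrative.
import numpy as np

def activity_score(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel change between consecutive frames."""
    return float(np.abs(frame.astype(float) - prev_frame.astype(float)).mean())

def flag_unusual(frames, threshold=25.0):
    """Return indices of frames whose change from the previous frame exceeds the threshold."""
    return [t for t in range(1, len(frames))
            if activity_score(frames[t - 1], frames[t]) > threshold]

rng = np.random.default_rng(5)
quiet = rng.integers(100, 110, size=(30, 48, 64))          # 30 near-static frames
frames = list(quiet)
frames[20] = rng.integers(0, 255, size=(48, 64))           # one frame with a burst of change
print(flag_unusual(frames))                                 # expect indices around 20 and 21
```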
Pathways Forward As outlined earlier, one of the fundamental risks of \ufb01negrained video monitoring of elections is the threat it poses to electoral integrity principles such as secret suffrage and transparency of vote counting. This is in addition to enhanced alienation of groups of voters brought about by the presence of the surveillance technology itself. The main purported bene\ufb01ts, on the other hand, are to deter electoral violence and fraud while effectively detecting unusual activity that merits manual attention. Given this context, the pathways in deploying technological election monitoring, if such solutions need to be pursued, should ideally negotiate a space that is compatible with electoral integrity while realizing the bene\ufb01ts of detecting electoral violence and other unusual activities. First, a pathway towards complying with principles of electoral integrity would be to institute only shallow forms of monitoring in lieu of full-blown video monitoring. Pathways within this stream could involve using audio monitoring, \ufb01ne-grained motion detection sensors, or very low-resolution cameras (that may capture only blurred images which can help identify movements, but not individual people or their identities), or a surveillance setup involving some or all of the above. Such shallow monitoring would not allow packaging the solution with technologies such as OCR (recollect JARVIS) or face recognition since those would not work with blurred video streams, mitigating the threat to electoral integrity. On the other hand, bespoke computer vision and pattern recognition technology would need to be developed in order to ensure effective violence and anomalous events detection over such streams, leading to new directions in AI research. If multiple sensing modalities (e.g., audio, motion, blurred videos) are involved, technologies for correlating evidences across such multi-modal streams would need to be developed to ensure that the bene\ufb01ts of the system are realized to a reasonable extent. As may be obvious, a reasonable \ufb01rst step towards enabling or kickstarting this direction would be legislative or regulatory viz., a legal regulation disallows the use of fullblown video monitoring in jurisdictions where they are currently not outlawed. Second, shallow monitoring comes with a challenge, that of ensuring that citizens understand and appreciate that such monitoring would not allow identi\ufb01cation of individuals. This trust issue, while a non-technical challenge, needs to be resolved satisfactorily in order to mitigate the risk of alienating citizen groups and marginalized communities. One solution path would be through open data initiatives, whereby all such monitoring data is made publicly available through (potentially real-time) web streams, to illustrate the shallowness of the monitoring in the open. This could encourage hacktivism based experiments, which could highlight any issues, opening up a pathway towards continuous auditing and feedback-based improvement of the shallow monitoring solutions. Open data could also encourage citizen audits of elections; a notable case comes from Russia where citizen analyses of CCTV footages foregrounded instances of ballot stuf\ufb01ng34. Visibility of AI Usage We now turn our attention to the visibility of AI usage to the voters and the general public, an important consideration. 
As an example, usage of technological voter authentication (e.g., using face recognition) is explicitly visible, whereas the usage of AI in voter list cleansing is likely visible only when process documents are available in the public domain. Visibility is an important consideration since visibility ensures public scrutiny, making low visibility usage more risky. In other words, usage of AI in a medium risk manner with very low visibility could potentially have worse long-term consequences than high risk high visibility AI usage. We brie\ufb02y review our \ufb01ve avenues of interest using the perspective of analyzing visibility of (extant or potential) AI usage. Voter List maintenance is naturally a back-of\ufb01ce task which is carried out by election of\ufb01cials in between elections, with a usual surge in activity in the run-up to elections to ensure election readiness. Given that this is a back-of\ufb01ce task, usage of AI within it is not directly visible to the public. Regulatory constraints that may require that a reason be given to voters from whom con\ufb01rmations are sought may still not expose the AI usage, given the vast literature of posthoc explainable AI. Post-hoc explainable AI methods such as LIME (Ribeiro, Singh, and Guestrin 2016) are capable of providing an automated explanation for any decision (in this context, the decision by a classi\ufb01er to choose to seek con\ufb01rmation from a particular voter), making it easy to con\ufb01rm to regulatory constraints on explanations without exposing the usage of AI. Unless there is a regulation that the use of AI within decision making needs to be transparent (or unless these fall under right to information regulations), the visibility of usage of AI within voter list maintenance could be very limited. The usage of AI to determine polling booth locations and identifying problem booths, as in the case of voter list maintenance, could also be done under very limited visibility. Given that polling booth determination and problem booth identi\ufb01cation involve much fewer decision making instances (of the order of the number of polling booths) and further manual inspections are necessary to arrange polling booth logistics or deploy additional patrols, this would likely have a substantive human component in the decision making pipeline. This makes any AI inputs into the process quite opaque. In contrast to the above discussed avenues, voter authentication is an arena where AI usage would inevitably have high visibility. The usage of biometrics for authentication 34https://www.huffpost.com/entry/russia-election-2012-ballotbox-stuf\ufb01ng_n_1321379 \fwould require polling of\ufb01cials to procure equipment and inform voters, so that they can be effectively deployed. In the case of facial recognition, most current algorithms would require capturing stable photos, as seen in Fig 5. If the technology matures to an extent where CCTV footages can be used for face-recognition based voter authentication, the visibility of AI usage may reduce. However, as of now, the usage of AI within the task of voter authentication is accompanied by high visibility to voters. Video monitoring, the \ufb01fth among our chosen avenues, involves the usages of cameras and connectivity infrastructure. This makes it a high visibility usage of AI since the observant voter would likely easily spot its usage. 
The emergence of small and invisible spy cameras poses a risk; however, we believe it is unlikely that electoral bodies would be keen on using such 'tricks' to reduce visibility. Overall Summary We now summarize the discussion so far to provide a quick high-level view of our assessments and impressions. Table 2 provides an overview of our evaluation of each of the five considered avenues in terms of technology readiness, risk level and visibility of AI usage. These are not objective assessments but informed evaluations of the current state of play, and should therefore be taken with caution. Yet, we illustrate the rationale behind some assessments in the interest of exposing our thought process. AI technology that is aligned with determining polling booth locations is available across different streams, but we assess that such technology would need non-trivial adaptation for usage in the task; this reasoning leads us to evaluate the technology readiness as medium. On the other hand, hot spot policing is more or less readily usable for the task of identifying problem booths, leading to a technology readiness assessment of high. The risk level of using AI in voter list maintenance and polling booth determination is assessed to be medium due to the virtual inevitability of using them only in a human-in-the-loop manner, ensuring some amount of expert oversight. In contrast, the usage of video monitoring, as we saw earlier, could sit in sharp tension with principles of electoral integrity; this is acknowledged through the very high risk level in the table. Given the discussion of visibility in the preceding section, we do not elaborate on that here to avoid repetition.

Avenue | Technology Readiness | Risk Level | Visibility of AI Usage
Voter List Maintenance | HIGH | MEDIUM | LOW
Polling Booth Locations | MEDIUM | MEDIUM | VERY LOW
Predicting Problem Booths | HIGH | HIGH | VERY LOW
Voter Authentication | VERY HIGH | HIGH | VERY HIGH
Video Monitoring | VERY HIGH | VERY HIGH | HIGH
Table 2: Overall Summary of Key Assessments.

Other Avenues of AI Usage While our choice of five avenues of AI usage was intended to give a general picture of the horizons in this area, there are obviously other aspects within core electoral processes where AI could play a role. We briefly consider a few such areas, to serve as directions for further reading for the interested reader. First, there are certain areas where peripheral electoral processes (recollect our core-peripheral distinction) could feed into the core processes. One such arena is that of opinion polls and exit polls. While opinion polls date back two centuries (Tankard Jr. 1972), exit polls have been used for half a century (Moon 2022). These have become regular and important fixtures that the public cares about, especially for elections in Western liberal democracies. Of late, with the failures in predicting the Brexit referendum and the 2016 US elections35, popular expectations of pollsters' accuracy may not be at a high. However, if confidence in such polls rebounds, there is potential for narratives that view them as ground truth, something that the actual result ought to align with. This could provide fertile ground for using deviations between actual results and opinion polls to determine which booths or areas should be subject to recounting. There are extant technologies aligned with this task (Fish and Reyzin 2017). In certain electoral systems such as Mexico (Mendoza and Nieto-Barajas 2016) and France36, the election authorities use statistical sampling to provide a quick preliminary result. Similar quick counts or parallel vote tabulation are growing in popularity in Africa37.
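The statistical core of such a quick count is simple: draw a random sample of booths, aggregate their reported counts, and attach an uncertainty interval, as in the sketch below. All numbers are synthetic, and real quick-count designs use stratified samples and more careful variance estimation.

```python
# Toy quick count: estimate a candidate's vote share from a random sample of booths.
# Real designs stratify the sample and use careful variance estimation; this sketch
# uses simple random sampling and a normal approximation on synthetic data.
import numpy as np

rng = np.random.default_rng(11)
n_booths = 5000
booth_votes = rng.integers(300, 900, size=n_booths)                 # total votes per booth
booth_share = np.clip(rng.normal(0.52, 0.06, size=n_booths), 0, 1)  # candidate's booth-level share
booth_cand = np.round(booth_votes * booth_share).astype(int)

sample = rng.choice(n_booths, size=200, replace=False)              # booths reporting early
shares = booth_cand[sample] / booth_votes[sample]
est = shares.mean()                                                  # rough booth-averaged estimate
se = shares.std(ddof=1) / np.sqrt(len(sample))

true_share = booth_cand.sum() / booth_votes.sum()
print(f"quick-count estimate: {est:.3f} +/- {1.96 * se:.3f} (approx. 95% interval)")
print(f"true overall share  : {true_share:.3f}")
```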
Technocratic solutions for selective recounting could use deviations between the actual count and such quick counts (in lieu of, or in addition to, opinion/exit polls) to determine which areas or booths should be targeted for recounting. As argued in (Aparicio 2006), such divergence-based determination of selective recounting could be biased, and random recounts are arguably safer. Second, the COVID-19 era has seen increased usage of mail-in ballots, especially during the 2020 US Presidential election, when 65 million voters exercised their franchise through mail-in ballots38. The National Vote at Home Institute, a postal ballot advocacy group, found major deficits in infrastructural readiness for large-scale mail voting (McReynolds et al. 2020). These include 15 states that lack steps to verify voter addresses, and 17 states that do not mandate a signature verification process. Within many states, it was reported that signature mismatches could lead to disqualifying the vote39 without the provision of a chance to fix the mistake. It has been pointed out (Kayser-Bril 2022) that many US states use machine learning for signature verification. It has also been noted therein that machine-learning-based signature verification systems may not have a high enough accuracy rate, and that their usage could be a source of disenfranchisement with a demographic gradient, since most systems are trained on native English speakers. It is not clear whether mail-in voting will continue to be used on a large scale beyond COVID-19; if it becomes institutionalized, however, AI-based signature verification is an avenue of potentially intense activity.

35: https://www.scientificamerican.com/article/why-polls-weremostly-wrong/
36: https://www.thelocal.fr/20220407/how-do-the-frenchproduce-such-accurate-early-election-results/
37: https://en.wikipedia.org/wiki/Quick_count
38: https://www.pewresearch.org/fact-tank/2020/11/10/mostmail-and-provisional-ballots-got-counted-in-past-u-s-electionsbut-many-did-not/
39: https://www.latimes.com/politics/story/2020-10-13/signatureverification-rules-lawsuits

Third, social media has come to be recognized as a major player in elections, especially with the surfacing of the Cambridge Analytica scandal (Hinds, Williams, and Joinson 2020). The scandal itself relates to supercharging electoral campaigns, which we regard as outside the core electoral process. However, the role of Cambridge Analytica (CA) in Nigeria (2015)40 and Kenya (2017)41 raises serious concerns. In both cases, CA worked in favor of the incumbent leaders, raising concerns about whether they had access to privileged government information, especially the personal financial and medical records of opposition leaders, as noted in the Guardian article above. CA's systems were later found to have actively interfered in electoral processes and worked against social cohesion (Nyabola 2018; Mohamed, Png, and Isaac 2020). More broadly, access to privileged information could provide unforeseeable pathways for such tech giants to influence core electoral processes.
In addition, the use of social media insights to inform decision-making within core electoral processes provides another facet of potential AI-driven interference. Discussion Our focus so far has been to show how what we regard as core electoral processes could be seen as fertile ground for AI usage, to illustrate the risks that such usage may pose, and to initiate deliberations around pathways that could mitigate the risks. We now consider some broader contexts. The period after the 2008 \ufb01nancial crisis saw the emergence of austerity, or slashing public spending, as an attractive proposition (McQuillan 2022). This has arguably fueled the uptake of AI within governments, as a new form of automation that will enable making newer forms of cuts to public spending. It is in this context that one would need to examine whether AI is simply automation, or something beyond. Gigerenzer (Gigerenzer 2022, Ch. 4) illustrates how AI-based decision making has a character that is sharply different from human decision making. This has been long known, though hardly spoken of in that way, through the nature of adversarial attacks such as one-pixel attacks (Su, Vargas, and Sakurai 2019); one-pixel attacks vary an image by just one pixel, a change that a human eye can barely make out, to affect a change in the decision made by the classi\ufb01er. This difference in the nature of decision making re\ufb02ects in the fact that humans and AI make very different forms of errors in undertaking the same task. This resonates with the 40https://www.theguardian.com/uknews/2018/mar/21/cambridge-analyticas-ruthless-bid-to-swaythe-vote-in-nigeria 41https://advox.globalvoices.org/2017/11/03/data-anddemocracy-what-role-did-cambridge-analytica-play-in-kenyaselections/ observations in (McQuillan 2022) that usage of AI alters the form of decision making. Given the enthusiasm towards AI usage within governments, such AI adoption is done without any democratic debate. This amounts to altering the social contract structurally without transparency or democratic deliberations. Radical structural changes to social contracts within algorithmic governance have been noted in UK welfare policy under the veneer of \u2018total transformation\u2019 and the \u2018digital welfare state\u2019 in the post-Brexit era (Alston 2018). AI as a means of altering laid-out processes latently without democratic debate could cause signi\ufb01cant changes within the realm of electoral processes, with far-reaching consequences to the foundations of democracy. There have been observations on the increasing corporatization of AI research (Ahmed and Wahed 2020), with trends pointing to an increasing share of industrial research featuring within the top AI avenues in recent years. While signi\ufb01cant AI research has been taking place within public universities, governments have been increasingly relying on private solution providers for AI usage within core functions, such as policing and migration (Naranjo and Molnar 2020). Even if the process responsibility is still held within the public sector, the AI building blocks are often very sophisticated and opaque that public sector employees may not be able to meaningfully audit suggestions from the AI. It has been observed that transparency and a critical audience is key to algorithmic accountability (Kemper and Kolkman 2019). 
Given this scenario, the usage of AI thus implicitly leads to a form of privatization of public sector decisionmaking, where not just the ownership, but the agency of decision-making is outsourced to privately developed and owned AI. As observed in (Mittelstadt et al. 2016), algorithms are inescapably value-laden; this leads to the current trends in AI uptake providing a channel for corporate values to leech into public sector decision making. Decision making within electoral processes making movements within the spectrum of values should be regarded as being of signi\ufb01cant concern. Overall, we observe that there are challenges, risks and concerns at various levels when it comes to AI usage within core electoral processes. This points to the need of more debates and studies into the topic, and the importance of ensuring that regulatory boundaries are set in very careful but unambiguous ways." + }, + { + "url": "http://arxiv.org/abs/2106.13271v1", + "title": "On Fairness and Interpretability", + "abstract": "Ethical AI spans a gamut of considerations. Among these, the most popular\nones, fairness and interpretability, have remained largely distinct in\ntechnical pursuits. We discuss and elucidate the differences between fairness\nand interpretability across a variety of dimensions. Further, we develop two\nprinciples-based frameworks towards developing ethical AI for the future that\nembrace aspects of both fairness and interpretability. First, interpretability\nfor fairness proposes instantiating interpretability within the realm of\nfairness to develop a new breed of ethical AI. Second, fairness and\ninterpretability initiates deliberations on bringing the best aspects of both\ntogether. We hope that these two frameworks will contribute to intensifying\nscholarly discussions on new frontiers of ethical AI that brings together\nfairness and interpretability.", + "authors": "Deepak P, Sanil V, Joemon M. Jose", + "published": "2021-06-24", + "updated": "2021-06-24", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY" + ], + "main_content": "Introduction AI algorithms are being increasingly used for decision making within scenarios with social and political consequences (e.g., bene\ufb01t eligibility, recidivism) as opposed to traditional automation scenarios (e.g., handwriting recognition). This has rightly spurred recent interest in Ethical AI. The broad umbrella of ethical AI or responsible AI [Dignum, 2019] involves considerations such as fairness [Chouldechova and Roth, 2020], interpretability1 [Molnar, 2020], privacy [Mohassel and Zhang, 2017] and trustworthiness [Toreini et al., 2020]. Of these, fairness and interpretability are arguably the two considerations that have been explored quite heavily. Certain pairs of considerations, such as interpretability and trustworthiness, may be seen as apparently synergistic. There is much less understanding of how technological efforts across other pairs of considerations, such as fairness and interpretability as we consider here, can work together. Our Contributions: In this position paper, we \ufb01rst discuss the conceptual distinctions between fairness and interpretability as it applies to AI/ML. Next, we propose two fron1Interpretability, as we use in this paper, is quite related to, and often interchangeable with the notion of explainability. tiers of AI research in which efforts towards fairness and interpretability can be meaningfully blended towards advancing ethical AI in unique and novel ways. 
Where appropriate, we will use example scenarios from unsupervised data-driven AI to illustrate the arguments. This is motivated partly by the fact that unsupervised learning is relatively less explored within ethical AI, making it malleable to fresh thought leads. 2 Fairness & Interpretability: Distinctions Fairness and Interpretability have largely been, within scholarly literature, seen as two distinct and different frontiers under the broader Ethical AI umbrella. The distinctions may be characterized under various dimensions as we discuss below. First, the family of fairness principles [Narayanan, 2018] are normative values that relate to the politics of AI-driven decision making [Wong, 2019]. On the other hand, interpretability considers user\u2019s ability to understand decisions, and lies at the interface between humans and AI. In other words, technological efforts towards deepening fairness would bene\ufb01t from better grounding on political philosophy, whereas interpretability would have an analogous relationship with behavioral sciences. In fact, much interpretable and explainable AI work has appeared within HCI-related computing avenues. Second, there is a tension between fairness and accuracy (or any other utilitarian metric, say, ef\ufb01ciency), and similarly, there is a tension between interpretability and accuracy. However, these tensions are in different directions, as recently outlined in [Kleinberg and Mullainathan, 2019]. The authors illustrate that a simple and interpretable model can be strictly improved on both fairness and accuracy by making it more complex (thus reducing interpretability). In other words, there exists a tension between fairness and interpretability. Third, a system that produces interpretable results enhances user\u2019s trust in the AI in ways that a fair AI may not (at least, in the short-term). This means that interpretability is arguably likely to be more \u2019popular\u2019, and thus would be prioritized over fairness by the private sector which is where most AI is developed. Fourth, interpretability can be assessed at the level of individual decisions made by the AI, whereas fairness assessments need to go much beyond analyzing individual decisions. Fairness assessment often involves a normative evaluation of the process and/or the distribution of decisions made. Fifth, there is a distinct contrast between the subjectivity of fairness and interpretability. Fair\fness is subjective at a normative level, and the subjectivity is often grounded in political positions; supporters of individual fairness are likely to be to the right of group fairness, within the left-right political spectrum. On the other hand, interpretability is often highly subjective in politically neutral ways; the same result or explanation may be regarded as less interpretable by one user, whereas it could be rated as more interpretable by another. This implies that interpretability is amenable to personalization (potentially through data-driven A/B testing, when user feedback is available) at a much more extensive level than fairness is. Lastly, it may be argued that different application domains of AI have different priorities between interpretability and fairness. Interpretability may be considered very important in \ufb01elds involving high-bandwidth human-technology interaction such as robotics and HCI. 
On the other hand, fairness may be of prime importance in societally relevant applications such as automation of applicant screening for jobs, policing and automated decision making on bene\ufb01t and healthcare eligibility. The distinctions discussed above do not just apply to fairness vs. interpretability. While other theories from ethical philosophy share many features with fairness as discussed above, dimensions such as privacy, trustworthiness and transparency share several characteristics with interpretability. 3 Frontiers of Synergy We develop two frontiers of synergy for technical efforts towards interpretability and fairness. Speci\ufb01cally, the directions we outline require that technological building blocks situated within either of the two (interpretability and fairness) work together to achieve meaningful advances within ethical AI. We neither target nor accomplish a conceptual uni\ufb01cation of the concepts of interpretability and fairness. \u2019Justice must not only be done, but must be seen to be done\u2019 Chief Justice Hewart (UK High Court, 1924) 4 Interpretability for Fairness Motivation: Consider a decision that is output by a system that is purportedly \u2019fair\u2019. How can somebody at the receiving end of a decision from such a system be sure that it is fair, according to the notion of fairness used in the system? This question may be instantiated based on the speci\ufb01c notion of fairness used, as follows. For a system that claims to use Rawlsian fairness, how can we ascertain that the present outcome is reasonable to ensure that the system functions in a way that is most advantageous to the least diadvantaged (Ref. difference principle [Rawls, 1971]). Or in the case of demographic parity as fairness, how can one be sure that the decision is a natural fallout of a process that is designed to achieve parity across sensitive groups. Particularly, users are likely to ask these questions when they \ufb01nd themselves at the receiving end of what they perceive as a bad decision (e.g., denial of welfare support). In the best interests of explainability and in making sure that fairness is perceived to be ensured (alluding to the 1924 quote above), we (i) ought to answer these \u2019how is this fair?\u2019 questions, and (ii) do so without reference to other decisions for other speci\ufb01c individuals (for privacy). Background: Having motivated that some sort of fairness explanations are necessary, we now consider ethical theories in the space. Accountability for Reasonableness (AFR) [Daniels and Sabin, 2008] is an ethical framework designed for healthcare scenarios, in particular, when fairness is to be accounted for in scenarios involving allocation of scarce healthcare resources. Of particular interest to us is AFR\u2019s relevance condition that suggests that decisions are explained by appealing to rationales that are reasonable enough to be accepted by fair-minded people who are disposed to \ufb01nding justi\ufb01able terms of co-operation. Badano [Badano, 2018] generalizes this to require acceptance by each reasonable person (aka full acceptability), which implicitly requires that those subject to most adverse decisions also be convinced. Recently, Wong [Wong, 2019] has argued that AFR could provide directions towards addressing the political dimensions of algorithmic fairness. Interpretability for Fairness: We propose a novel framework, Interpretability for Fairness (IFF), drawing inspiration from AFR. IFF blends AFR with the design for values (Ref. 
[Dignum, 2019] Sec 4.4) approach to formulate a set of principles targeted at using interpretability as a pathway to enhance acceptability of fair AI. The two IFF principles are: \u2022 Fairness Publicity Condition: The fairness value(s) that are sought to be achieved by the AI system must be laid out clearly in layman-friendly language as comprehensively as possible. If a trade-off between values is intended (as often sought, such as a balance between utilitarianism and demographic parity fairness), the relative importance between the values in the mix should be exempli\ufb01ed. \u2022 Values to Decision Interpretability Condition: The system should strive to produce a layman-friendly and simple interpretation of each decision substantiating how it relates to the mix of values embodied in the system, as outlined in the publicity condition. This interpretation should be acceptable to any reasonable person who is disposed to \ufb01nding mutually justi\ufb01able terms of co-operation. These principles are intended to be meaningful to a data scientist equipped with a reasonable understanding of the nuances and social aspects of the domain in which the AI is being designed to operate in, as opposed to abstract ones that pose a \u2019translation\u2019 challenge. In particular, IFF is at a lower level of abstraction due to instantiating accountability as interpretability and reasonableness as fairness-oriented reasonableness. IFF also keeps process governance aspects (e.g., appeals as in AFR) out of its scope and is focused on the technical design of the AI. While the \ufb01rst IFF condition draws from the analogous condition in AFR, the second condition is inspired by the \u2019design for values\u2019 [Aldewereld et al., 2015] maxim on linking values to concrete software functionalities. The second condition requires fairness interpretability, or explaining a decision based on the fairness values and any trade-offs with non-fairness values used in the system. We note here that the layman-friendliness requirement in the \ufb01rst condition entails exclusion of socio-technical terms such as structural discrimination [Pincus, 1999] and intersectionality [Carastathis, 2014], whereas that in the second condition would require exclusion of AI-terminology such as bayesian \finference [Ghosal and Van der Vaart, 2017]. IFF and Current Solutions: IFF requires interpretability that is distinctly different from interpretability in the sense it is used in current Fair ML literature. Within our focus area of unsupervised learning, interpretability is dominated by rule-learning where features for rules are drawn from the data [Balachandran et al., 2012] or auxiliary features [Sambaturu et al., 2020] such as tags. While these enable human-understandable descriptions of the outcomes, they do not satisfy the IFF requirement of explaining the fairness. Recent work on fair representation learning [He et al., 2020] has posited that fairness-based reengineered versions of original features may be considered interpretable as long as they remain attached to the semantic labels (e.g., maths marks, annual income) associated with features. However, IFF requires that the re-engineering process be interpretable on the basis of fairness values; for example, if the re-engineering transforms annual income for an individual from $40k to $35k prior to further downstream processing, IFF requires that this transformation be explained as a fallout of a reasonable fairness-seeking process. 
We haven\u2019t come across Fair ML work that may be argued to be conformant to IFF. IFF-aligned AI: IFF is construed, much like AFR, as a set of guiding principles, and is not prescriptive as to how conformance may be achieved. By way of an illustrative example, a feature re-engineering method that corrects the dimensions of achievement of socially discriminated demographics (e.g., backward castes) upward may be acceptable by reasonable people as a fallout of a process targeting demographic parity, as long as the re-engineering process is interpretable. This is similar in spirit to differentiated age and attempt limits as enforced in India\u2019s af\ufb01rmative action process2. Why IFF? As indicated in the motivation, IFF has the potential to deepen user con\ufb01dence and trust in fairness-seeking algorithms through explicitly illustrating the conformance to fairness. A detailed treatment of the various aspects of the two IFF principles is not feasible due to space constraints. 5 Fairness and Interpretations Motivation: Consider using fair AI along with state-of-theart approaches for interpretability that explain the outcomes using data or auxiliary information (not the IFF fairness interpretability). The absence of a connection between the two may lead to a dissonance between their outputs, especially for fair AI that operationalizes notions of group fairness. The fairness-agnostic search for user-friendly explanations could lead to manifestly unimpressive explanations. For example, the membership of an individual within a group could be motivated by demographic fairness considerations, but an explanation referring to a sensitive attribute such as ethnicity may be unacceptable, since that may be perceived as demeaning individuality. On the other hand, excluding sensitive attributes from the realm of explanations may lead to lower quality (thus, unacceptable) explanations. Either of the above could undermine user\u2019s trust in the AI, given recent research linking trust and explanation quality [Kunkel et al., 2019]. 2e.g., https://en.wikipedia.org/wiki/Civil Services Examination (India)#Age Fairness and Interpretations: Towards addressing the above conundrum, we propose a layered paradigm, called Fairness and Interpretations (F&I). First, we propose that an AI system be constrained to conform to both: (i) fairness, in accordance with the fairness values it targets, and (ii) reasonable interpretability, i.e., be able to provide reasonable explanations for its decisions. This rules out fair AI that is not reasonably interpretable, and vice versa. There could be several ways of characterizing reasonableness in explanations. One way would be to characterize reasonableness as individual fairness; in other words, the space of reasonable explanations may be characterized as being a space where the associated outcomes are smooth. Others ways could be to choose explanation paradigms (of which there are many [Binns et al., 2018]) that enhance user\u2019s perception of system fairness (an empirical study appears at [Dodge et al., 2019]). Second, for scenarios where both fairness and reasonable interpretability are hard to satisfy together (as could be the case where the data has high degrees of bias), we propose that the user be informed that no interpretable explanation can be supplied for the decision, and that the result be accompanied by fairness explanations as outlined in IFF. 
Thus, an F&I-compliant AI is one that adheres to fairness (as designed for), and provides either reasonable decision interpretability or only fairness explanations. Additionally, the two F&I principles are ordered lexically; unless there is a good reason that the \ufb01rst principle cannot be satis\ufb01ed, the second does not come into play. The appeals process for such a system needs to be cognizant of whether the decision was accompanied by reasonable explanations or just fairness explanations; a higher appeal rate for the latter may be expected and planned for. Why F&I? F&I is intended as a paradigm that will meaningfully bring both fairness and interpretability together without artifacts of one dampening the other. While a detailed analysis of F&I consequences is infeasible here due to space, F&I-aligned AI is expected to be able to fuse normative and user-oriented aspects towards enhancing ethical AI. F&I is well-aligned with and intends to further operationalization of the right to explanations enshrined within GDPR [Selbst and Powles, 2018] recommendations of the European Union. Technical Challenges: In contrast to IFF, F&I lies in a technically pristine space and entails crisp technical challenges. First, the notion of reasonable explanations needs to be technically instantiated, and computational approaches to determine reasonableness effectively and ef\ufb01ciently needs to be developed. Second, the twin constraints (fairness and reasonable explanations) would need to be achieved together, requiring novel multi-criteria optimization methods. Third, a decision procedure to determine when to fall back to IFF explanations needs to be developed. While these may sound simple to state, domain-speci\ufb01c nuances would entail different domain-speci\ufb01c technical pathways for achieving F&I. 6 Concluding Notes We considered the distinctions between fairness and interpretability, and outlined two principles-based frameworks \fthat entail technical challenges where fairness and interpretability can meaningfully work together. We hope that these will contribute to deepening the scholarly debate towards enhancing ethical AI in meaningful ways." + }, + { + "url": "http://arxiv.org/abs/2106.06049v1", + "title": "FiSH: Fair Spatial Hotspots", + "abstract": "Pervasiveness of tracking devices and enhanced availability of spatially\nlocated data has deepened interest in using them for various policy\ninterventions, through computational data analysis tasks such as spatial hot\nspot detection. In this paper, we consider, for the first time to our best\nknowledge, fairness in detecting spatial hot spots. We motivate the need for\nensuring fairness through statistical parity over the collective population\ncovered across chosen hot spots. We then characterize the task of identifying a\ndiverse set of solutions in the noteworthiness-fairness trade-off spectrum, to\nempower the user to choose a trade-off justified by the policy domain. Being a\nnovel task formulation, we also develop a suite of evaluation metrics for fair\nhot spots, motivated by the need to evaluate pertinent aspects of the task. We\nillustrate the computational infeasibility of identifying fair hot spots using\nnaive and/or direct approaches and devise a method, codenamed {\\it FiSH}, for\nefficiently identifying high-quality, fair and diverse sets of spatial hot\nspots. 
FiSH traverses the tree-structured search space using heuristics that\nguide it towards identifying effective and fair sets of spatial hot spots.\nThrough an extensive empirical analysis over a real-world dataset from the\ndomain of human development, we illustrate that FiSH generates high-quality\nsolutions at fast response times.", + "authors": "Deepak P, Sowmya S Sundaram", + "published": "2021-06-01", + "updated": "2021-06-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "INTRODUCTION With sensing and tracking devices such as mobile phones and IoT becoming pervasive in this web-driven era, there is an abundance of spatial data across real-world settings. Within such spatial datasets, it is often of interest to identify geographically localized groups of entities that are of sufficient size and express a distinctive character so strongly that it is unlikely to have occurred by chance. To illustrate an example from our times, COVID-19 contact tracing apps accumulate large amounts of spatial data of people, of which some are known to have a COVID-19 infection. It would be of interest to automatically identify localized regions of high COVID-19 incidence, referred to as hot spots in contemporary reporting1, so that the information could be channelized to health experts to identify causal reasons, or to public policy experts to develop a mitigation strategy for those regions. While COVID-19 hot spots are characterized by high disease incidence rates, other web and new age data scenarios may call for different formulations of hot spot character, viz., high crime rates (law enforcement), intense poverty (development studies), high mobile data usage (mobile network optimization) and so on. For example, Figure 1 illustrates hot spots of educational underachievement in India as identified from a human development dataset. In each case, identifying a set of hot spots would be of use so they could be subjected to an appropriate policy action. The unsupervised learning task of detecting spatial hot spots was pioneered by the spatial scan statistic (SSS) [21]. The spatial scan statistic and its variants within the SaTScan2 toolkit have remained extremely 1https://www.nbcnews.com/news/us-news/map-track-summer-2020-coronavirushotspots-united-states-n1231332 2https://www.satscan.org/ Figure 1: An illustration of hot spots of Low Educational Achievement in India popular for detecting spatial hot spots over the past two decades. While health and communicable diseases form the most popular application area of SSS (e.g., [30]), they have been used within domains as diverse as archaeology [35] and urban planning [16]. 1.1 Fairness in Hot Spots In scenarios where spatial hot spots are to be used to inform government and public sector action, especially in sensitive policy domains (e.g., law enforcement [25], development), it is often important to ensure that the collective population subject to the policy action is diverse in terms of protected attributes such as ethnicity, caste, religion, nationality or language, among others. Consider hot spot detection on a crime database to inform policy action that could include sanctioning higher levels of police patrols for those regions. This would likely lead to higher levels of stop and frisk checks in the identified hot spots, and would translate to heightened inconvenience to the population in the region. Against this backdrop, consider a sensitive attribute such as ethnicity. 
If the distribution of those who land up in crime hot spots is skewed towards particular ethnicities, say minorities as often happens [23], it directly entails that they are subject to much more inconvenience than others. These, and analogous scenarios in various other sectors, provide a normatively compelling case to ensure that the inconvenience load stemming from crime hot spot detection (and other downstream processing) be proportionally distributed across ethnicities. The same kind of reasoning holds for groups defined over other sensitive attributes such as religion and nationality. It is also notable that ethnically skewed hot spot detection and patrolling would exacerbate the bias in future data. Minor crimes are recorded arXiv:2106.06049v1 [cs.LG] 1 Jun 2021 \fin the data only when committed as well as observed. Thus, majority and minority areas with similar real crime prevalance, alongside minority-oriented patrolling, would lead to data that records higher crime prevalance in the latter. Second, even in cases where the intended policy action is positive (e.g., setting up job support centres for unemployment hot spots), the policy being perceived as aligned to particular ethnicities could risk social solidarity and open avenues for populist backlash [15], which could ultimately jeopardize the policy action itself. While considerations as above are most felt in policy domains such as policing and human development, these find expression in hot spot based prioritization in provisioning any common good. Ensuring fair distribution of the impact of any policy action, across sensitive attributes such as ethnicities, is aligned with the theory of luck egalitarianism [18], one that suggests distributive shares (of inconvenience or benefits) be not influenced by arbitrary factors, especially those of \u2018brute luck\u2019 that manifest as membership in sensitive attribute groups such as ethnicity, religion and gender (since individuals do not choose those memberships are are often just born into one). Such notions have been interpreted as a need for orthogonality between groups in the output and groups defined on sensitive attributes, and has been embedded into machine learning algorithms through the formulation of statistical parity (e.g., [1]). In summary, there is an compelling case, as in the case of other machine learning tasks, for hot spot detection to be tailored in a way that the population covered across the chosen hot spots be diverse along protected demographic groups such as ethnicity, gender religion, caste and similar. 1.2 Our Contributions We now outline our contributions in this paper. First, we characterize the novel task of detection of fair spatial hot spots, for the first time. In particular, we outline a task formulation for enumerating a diverse sample of trade-off points in the noteworthiness-fairness spectrum, to suit diverse scenarios that require different trade-off points between noteworthiness and fairness. We note that straightforward solutions for the task would be computationally infeasible for even moderate dataset sizes. Second, we develop a method, FiSH, short for Fair Spatial Hot Spots, for efficiently enumerating sets of hot spots along the quality-fairness trade-off. FiSH works as a layer over any chosen fairness-agnostic spatial hot spot detection method, making it available across diverse scenarios and existing methodologies for those scenarios. Third, we outline a suite of evaluation measures to assess the quality and fairness of results. 
Lastly, we perform an extensive empirical evaluation over realworld datasets which illustrates the effectiveness and efficiency of FiSH in identifying diverse and fair hot spots. 2 RELATED WORK Given that fairness in spatial hot spots is a novel problem, we consider related work across two streams. We start by considering work on identifying outliers and spatial hot spots. These tasks are starkly different in terms of how the results are characterized. Outliers are determined based on neighborhood density, whereas hot spots are determined based on hotness on a chosen attribute (e.g., diseased, poor etc.). In particular, the notion of a hotness attribute is absent in the formulation for outlier detection making them fundamentally different tasks. Despite being non-identical tasks, there are similarities in the overall spirit of the tasks, which makes outlier identification relevant to the interested reader. We start with a discussion on methods for the tasks of outlier detection and spatial hot spots, and then move on to work on fairness in machine learning as applied to tasks allied to ours. 2.1 Outlier Identification There have been a variety of problem settings that seek to identify objects that are distinct from either their surroundings or the broader dataset. The more popular formulations use the former notion, that of measuring contrast from the surroundings of the data object, i.e., making use of neighborhood density. LOF [5] (and improvements [20]) consider identifying individual objects, aka outliers, which are interesting due to their (relatively sparser) spatial neighborhoods. It is noteworthy that these make object-level decisions informed purely by spatial attributes (without reference to non-spatial attributes like diseased/non-diseased, as required for COVID-19 hot spot determination). SLOM [6] extends the objectlevel local neighborhood-based decision making framework to incorporate information from non-spatial attributes. Among outlier detection methods that assess the contrast of individual data objects with the dataset as a whole, the popular paradigm is to build a dataset level statistical model, followed by assessing the conformance of individual objects to the model; those that are less conformant would be regarded as outliers. Such statistical models could be a clustering [38], dirichlet mixture [13], or more recently, auto-encoders [7, 22]. 2.2 Spatial Hot Spots Spatial Scan Statistics (SSS), pioneered by Kulldorff [21], are methods that identify localized regions that encompass multiple objects (in contrast to making decisions on individual objects, as in LOF) which collectively differ from overall dataset on chosen non-spatial hotness attributes (e.g. diseased, poor etc.). The focus is on characterizing regions which may be interpreted as hot spots due to the divergence of their character from the overall dataset. This makes SSS a markedly different task from outlier identification in specification, input data requirements, internal workings and output format. SSS spatial hot spots are vetted using a statistical likelihood ratio test to ascertain significant divergence from global character. This makes SSS as well as its various variants, as implemented within SaTScan, a statistically principled family of methods to detect spatial hot spots. While Kulldorff\u2019s initial proposal is designed to identify circular hot spots, the framework has been generalized to identify arbitrary shapes in several ways; ULS [29] is a notable work along that direction. 
Other methods such as bump hunting [14] and LHA [33] address related problems and leverage data mining methods. Despite an array of diverse research in identifying spatial hot spots, SSS methods have remained extremely popular. Just since 2020, there have been 1000+ papers3 that make use of SSS and other scan statistics within SaTScan. Our technique, FiSH, can work alongside any method that can provide an ordered output of hot spots, such as SaTScan methodologies. 2.3 Fair Unsupervised Learning While most attention within the flourishing field of fairness in machine learning [9] has focused on supervised learning tasks, there has been some recent interest in fairness for unsupervised learning tasks [28]. Among the two streams of fairness explored in ML, viz., individual and group fairness (refer [3] for a critical comparative 3https://scholar.google.com/scholar?as_ylo=2020&q=satscan&hl=en&as_sdt=0,5 2 \fanalysis), most work on fair unsupervised learning has focused on group fairness. Group fairness targets to ensure that the outcomes of the analytics task (e.g., clusters, top-\ud835\udc58results etc.) embody a fair distribution of groups defined on protected attributes such as gender, ethnicity, language, religion, nationality or others. As alluded to earlier, the most common instantiation of group fairness has been through the computational notion of statistical parity, initially introduced within the context of classification [12]. Group fair unsupervised learning work includes those on fair clustering (e.g., [8]), retrieval (e.g., [39]) and representation learning (e.g., [26]). While there has been no work on fair spatial hot spots yet, there has been some recent work on fairness in outlier detection which we discuss below. Fair Outliers: There has been some recent work on fair outlier detection. We start by outlining the core differences between outlier detection to illustrate why fairness enhancements targeted at outlier detection would not be applicable for spatial hot spots. First, outlier detection involves object-level decision making, whereas hot spots are determined at the level of object groups. Second, they do not make use of any non-spatial hotness attribute (e.g., diseased, poor etc.) to determine outlierness, whereas a key nonspatial attribute is used to characterize hot spots. The second difference makes algorithms for outlier detection contrast highly from those for identifying spatial hot spots. Among recent fair outlier detection papers, [10] develops a human-in-the-loop method for fair outlier detection, whereas [11] focuses on automated group fair outlier detection, developing FairLOF, a technique that extends LOF (discussed above) for fairness. FairLOF adapts LOF to incorporate adjustments based on protected attribute memberships of the object in question and its neighbors, to ensure that protected groups are fairly represented among outliers. It may be noted that the protected attributes are used exclusively to embed fairness, and not to characterize outlierness. There is a third paper, [31] which makes an intriguing proposition of achieving group fairness (on protected attributes) while being unaware of protected attributes at decision time. To our best knowledge, there has been no prior work on fairness in detecting spatial hot spots or anomalous object groups of other kinds. 3 PROBLEM STATEMENT Consider a dataset D = {. . . , \ud835\udc37, . . .}. 
Each object \ud835\udc37is associated with a set of spatial attributes such as (\ud835\udc65,\ud835\udc66) for a 2D space, or (\ud835\udc59\ud835\udc4e\ud835\udc61,\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc54) for locations of people. Further, each \ud835\udc37is associated with a non-spatial hotness attribute \ud835\udc63\u2208{0, 1} such as diseased (for epidemiology) or poor (for human development), which is used to determine spatial hot spots. \ud835\udc37is also associated with protected attributes (e.g., ethnicity, religion) as we will see momentarily. Consider a method for detecting spatial hot spots, such as spatial scan statistics, that is capable of providing a ranked list of top spatial hotspots, as S = [\ud835\udc461,\ud835\udc462, . . . ,\ud835\udc46\ud835\udc5a]. Each \ud835\udc46\ud835\udc56is associated with a spatial region \ud835\udc45\ud835\udc56(circular/spherical in the case of the basic SSS) such that the data objects (from D) that fall within \ud835\udc45\ud835\udc56have a significantly different hotness profile than the overall dataset. For example, the population within \ud835\udc45\ud835\udc56may have a significant high (or low) incidence rate of poverty as compared to the whole population. Noteworthiness of spatial hot spots, analyzed statistically (as in SSS), is directly related to both the size of the population within the hot spot and the extent of divergence on the hotness attribute. S is the list of spatial hot spots ordered in decreasing statistical noteworthiness; thus \ud835\udc46\ud835\udc56is more noteworthy than \ud835\udc46\ud835\udc56+1. When \ud835\udc58 (typically, \ud835\udc58<< \ud835\udc5a) noteworthy spatial hot spots are to be chosen to action upon without consideration to fairness, the most noteworthy \ud835\udc58hot spots, i.e., S\ud835\udc61\ud835\udc5c\ud835\udc5d\ud835\udc58= [\ud835\udc461, . . . ,\ud835\udc46\ud835\udc58], would be a natural choice. 3.1 Fair Spatial Hot Spots The task of fair spatial hot spots detection is to ensure that the \ud835\udc58 hot spots chosen for policy action, in addition to noteworthiness considerations as above, together encompass a diverse population when profiled along protected attributes such as ethnicity, religion, nationality etc, as motivated earlier. In other words, each demographic group is to be accorded a fair share within the collective population across the chosen hot spots. As mentioned earlier, this notion of statistical parity has been widely used as the natural measure of fairness in unsupervised learning [2, 8, 11]. When the protected attributes are chosen as those that an individual has no opportunity to actively decide for herself (observe that this is the case with ethnicity, gender as well as religion and nationality to lesser extents), statistical parity aligns particularly well with the philosophy of luck egalitarianism [19], as noted in Section 1.1. We will use S\ud835\udc53\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc58to denote a set of \ud835\udc58hot spots (from S) that are selected in a fairness-conscious way. 
It is desired that S_fairk fares well on both the following measures:

N(\mathcal{S}_{fair_k}) = \sum_{S \in \mathcal{S}_{fair_k}} rank_{\mathcal{S}}(S)    (1)

F(\mathcal{S}_{fair_k}) = \sum_{P \in \mathcal{P}} Div_P\Big(\mathcal{D}, \bigcup_{S \in \mathcal{S}_{fair_k}} Pop(S)\Big)    (2)

The first, N(.), relates to noteworthiness and is simply the sum of the ranks (ranks within S) of the chosen spatial hot spots. Lower values of N(.) are desirable, and S_topk scores best on N(.), due to comprising the top-k (so, N(S_topk) = \sum_{i=1}^{k} i = k(k+1)/2). The second, F(.), is a fairness measure, which requires that the population covered across the hot spots within S_fairk be minimally divergent from the overall population, when measured on a pre-specified set of protected attributes P (e.g., ethnicity, gender); Div_P(., .) measures divergence on attribute P ∈ P. The divergence may be computed using a suitable distance measure; we will use Wasserstein distance [34]. As in the case of N(.), lower values of F(.) are desirable too. Though lower, and not higher, values of N(.) and F(.) indicate deeper noteworthiness and fairness, we refer to these measures as noteworthiness and fairness to avoid introducing new terminology.

3.2 Diverse Selection of S_fairk Candidates

The noteworthiness and fairness considerations are expected to be in tension (an instance of the fairness-accuracy tension [24]), since fairness is not expected to come for free (as argued extensively in [17]). One can envision a range of possibilities for S_fairk, each of which chooses a different point in the trade-off between N(.) and F(.). At one end is S_topk (best N(.), likely worst F(.)), and at the other end is a maximally fair configuration that may include extremely low-ranked hot spots from S. These would form the pareto frontier4 when all the mCk (k-sized) subsets of S are visualized as points in the 2D noteworthiness-fairness space, as illustrated in Figure 2. Each point in the pareto frontier (often called skyline [4]) is said to be pareto efficient or pareto optimal since there is no realizable point which is strictly better than it on both N and F measures. In other words, S_fairk candidates that are not part of the pareto frontier can be safely excluded from consideration, since there would be a pareto frontier candidate that is strictly better than it on both noteworthiness and fairness. Each policy domain may choose a different point in the trade-off offered across candidates in the pareto frontier, after due consideration of several available trade-off points. For example, policing may require a high degree of fairness, whereas epidemiology interventions may be able to justify policy actions on less diverse populations based on the extent of supporting medical evidence. The pareto frontier may be too large (it could contain hundreds of candidates, theoretically bounded above only by O(mCk)) for a human user to fully peruse. Thus, an obvious recourse would be to identify τ diverse pareto efficient candidates (henceforth, τ-dpe), where τ is a pre-specified parameter, so that the human user may be able to choose appropriately from a varied set of possibilities. A natural and simple but incredibly inefficient solution would be to (i) enumerate the entire pareto frontier, (ii) trace the sequence of pareto efficient points from the top-left to the bottom-right (i.e., the dotted line), (iii) split the sequence into τ−1 equally sized segments, and (iv) take the τ segment end points as the result. To summarize, the diverse candidate selection task outlined as τ-dpe requires a diverse set of pareto efficient candidates in the N-F space, each candidate representing a k-sized subset of S.

4 https://en.wikipedia.org/wiki/Pareto_efficiency#Pareto_frontier

Figure 2: Illustration of the N-F space with k-sized subsets of S. The pareto frontier is marked with a dotted line. The circled points indicate a possible solution to the approximate τ-dpe problem (τ = 4). The exact τ-dpe would comprise equally spaced points from the pareto frontier.

3.3 Approximate τ-dpe

It may be observed that it is infeasible to enumerate the mCk subsets (e.g., 40C10 ≈ 8.5E+8) in the N-F space just due to the profusion of possibilities, making exact τ-dpe identification (as outlined in the four-step process in the previous section) infeasible for practical scenarios. This makes the task of identifying a close approximation of the τ-dpe results efficiently a natural alternative for a policy expert to examine the trade-off points and arrive at a reasonable choice of S_fairk to subject to policy action. This brings us to the approximate τ-dpe task, which is that of efficiently identifying a close approximation of the exact τ-dpe result.
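For concreteness, the measures in Equations (1) and (2) and the naive exact procedure above can be sketched in Python as follows. This is an illustrative sketch only, not the authors' implementation: hotspot_pops (the rank-ordered list of populations covered by the hot spots in S), the use of total variation distance as a simple stand-in for the Wasserstein distance mentioned above, and all helper names are assumptions introduced for illustration.

from itertools import combinations
from collections import Counter

def noteworthiness(candidate):
    # Eq. (1): sum of the ranks (1-indexed positions in S) of the chosen hot spots; lower is better.
    return sum(candidate)

def group_proportions(members, attr):
    # Distribution of a protected attribute over a collection of individuals (dicts).
    counts = Counter(ind[attr] for ind in members)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def fairness_divergence(dataset, covered, protected_attrs):
    # Eq. (2): deviation of the covered population from the whole dataset on each
    # protected attribute; total variation distance is used here as a stand-in for
    # the Wasserstein distance used in the paper.
    div = 0.0
    for attr in protected_attrs:
        p, q = group_proportions(dataset, attr), group_proportions(covered, attr)
        groups = set(p) | set(q)
        div += 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)
    return div

def pareto_frontier(points):
    # points: list of (N, F, candidate); keep candidates not dominated by any other point.
    return sorted((n, f, c) for n, f, c in points
                  if not any(n2 <= n and f2 <= f and (n2, f2) != (n, f)
                             for n2, f2, _ in points))

def exact_tau_dpe(hotspot_pops, dataset, protected_attrs, k, tau):
    # Naive exact procedure: enumerate all k-sized subsets of S, build the pareto
    # frontier in the N-F space, and return tau equally spaced frontier candidates.
    m = len(hotspot_pops)
    points = []
    for cand in combinations(range(1, m + 1), k):
        covered = [ind for rank in cand for ind in hotspot_pops[rank - 1]]
        points.append((noteworthiness(cand),
                       fairness_divergence(dataset, covered, protected_attrs),
                       cand))
    frontier = pareto_frontier(points)  # top-left to bottom-right of the N-F space
    idx = [round(i * (len(frontier) - 1) / max(tau - 1, 1)) for i in range(tau)]
    return [frontier[i] for i in idx]

Even this short sketch makes the infeasibility argument tangible: the enumeration loop runs mCk times, which is already about 8.5E+8 iterations at m = 40 and k = 10.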
The set of circled points in Figure 2 illustrates a possible solution to the approximate τ-dpe task. All pertinent notations are outlined in Table 1 for easy reference. Our method, FiSH, that addresses the approximate τ-dpe task, is detailed below.

Table 1: Table of Notations for Easy Reference
Notation | What it stands for
S | the ordered list of spatial hotspots used as starting point for the τ-dpe task
S_topk | the subset of the k most noteworthy hotspots from S
S_fairk | a k-sized subset of S, a candidate for fair selection of hot spots
N(S_fairk) | sum of ranks of the spatial hot spots within S_fairk; lower denotes better noteworthiness
F(S_fairk) | deviation of S_fairk's population from the dataset on protected attributes; lower denotes better fairness
m | cardinality of S
k | # hotspots from S desired in each output candidate
τ | number of candidates desired in the output
b | beam width parameter used by FiSH (Sec 4)

4 FISH: FAIR SPATIAL HOT SPOTS

FiSH is an efficient heuristic-driven technique addressing the approximate τ-dpe task outlined above. We first describe a systematic organization of the search space, followed by a heuristic method that traverses the space prioritizing the search using three considerations: pareto efficiency, diversity and efficient search.

4.1 Search Space Organization

Recall that we start with a noteworthiness-ordered list of spatial hot spots S = [S_1, . . . , S_m]. Our full search space comprises the mCk distinct k-sized subsets of S. We use the lexical ordering in S to organize these candidates as leaves of a tree structure, as shown in Figure 3. Each node in the tree is labelled with an element from S, and no node in the FiSH search tree has a child that is lexically prior to itself. Such a hierarchical organization is popular for string matching tasks, where they are called prefix trees [37]. In devising FiSH, we draw inspiration from using prefix structures for skyline search over databases [27]. Each internal node at level l (root level = 0) represents an l-sized subset of S comprising the l nodes indicated in the path from the root to itself. The lexical ordering ensures that each subset of S has a unique position in the tree, one arrived at by following branches corresponding to nodes in the subset according to the lexical ordering. The mCk candidates would be the nodes at level k. It is infeasible to enumerate them fully, as observed earlier. Thus, FiSH adopts a heuristic search strategy to traverse the tree selectively to follow paths leading to a good solution (i.e., a set of τ nodes at level k) for the approximate τ-dpe task.

Figure 3: FiSH's Search Tree: Nodes at level k represent k-sized subsets of S, and form points in the N-F space (Fig 2).

4.2 FiSH Search Strategy

The exact τ-dpe result set is characterized by pareto efficiency and diversity, when applied over the mCk candidates.
The FiSH search strategy uses precisely these criteria as heuristics to traverse the search tree efficiently from the root downward. The core idea behind this search strategy is our conjecture that pareto efficiency and diversity at a given level in the FiSH search tree would be predictive of pareto efficiency and diversity at the next level. We operationalize this heuristic strategy using beam search, a classical memory-optimized search meta-heuristic [32] that has received much recent attention [36]. FiSH starts its search from the root node, expanding to the first-level child nodes, each of which represents a singleton set denoting the choice of a particular spatial hot spot from S. This forms the candidate set at level 1 of the FiSH tree, C_1 = {{S_1}, {S_2}, . . .}. These 1-sized subsets of S are then arranged in an N-F space as in Fig 2. Note that the N-F space of 1-sized subsets is distinct and different from the N-F space of k-sized subsets (Fig. 2). The pareto-efficient subset of C_1 is then identified as P(C_1). The candidates in P(C_1) are then arranged in a linear sequence tracing the pareto frontier from the top-left to the bottom-right point (similar to the illustration of the pareto frontier in Fig 2). This linear sequence is split into b−1 equally spaced segments, and the b points at the segment end-points are chosen as D_b(P(C_1)), a b-sized subset of C_1. The candidate set at the next level of the tree search process, i.e., C_2, is simply the set of all children of nodes in D_b(P(C_1)) (actually, the subsets of S that they stand for).

C_2 = \bigcup_{c \in D_b(P(C_1))} children(c)    (3)

It may be noted that C_2 is a small subset of the set of all 2-sized subsets of S, since only children of the b nodes selected from the previous level are selected for inclusion in C_2. Next, C_2 is subject to the same processing as C_1, comprising: (1) identifying pareto efficient candidates P(C_2), (2) identifying a diverse b-sized subset D_b(P(C_2)), and (3) following the children pointers, to arrive at the candidate set for the next level. This process continues up until C_k, whereby the pareto frontier P(C_k) is identified, followed by the choice of τ diverse candidates which will eventually form FiSH's result set for the approximate τ-dpe task. This search strategy is illustrated formally in Algorithm 1.

Algorithm 1: FiSH Search Technique
input: S organized as a search tree, k, τ
parameters: beam width b
1  C_1 = {{S_1}, {S_2}, . . .}
2  for i ← 1 to k−1 do
3      P(C_i) = pareto frontier of C_i in the N-F space
4      D_b(P(C_i)) = equally spaced b candidates from the pareto frontier P(C_i)
5      C_{i+1} = \bigcup_{C \in D_b(P(C_i))} children(C)
6  P(C_k) = pareto frontier of C_k in the N-F space
7  R = equally spaced τ points from P(C_k)
8  Return R
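Algorithm 1 can also be sketched directly in Python. The fragment below is illustrative rather than the authors' implementation; score_nf is assumed to map a candidate (a lexically ordered tuple of hot spot ranks) to its (N, F) coordinates, for instance via the noteworthiness and fairness helpers sketched earlier, and the helper names are ours.

def fish_search(m, k, tau, b, score_nf):
    # Level-by-level beam search over the prefix tree of subsets of S (Algorithm 1).
    def pareto(points):
        # points: list of (N, F, candidate); retain the pareto frontier, sorted
        # from the top-left to the bottom-right of the N-F space.
        return sorted(p for p in points
                      if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2]
                                 for q in points))

    def spaced(frontier, width):
        # Equally spaced selection along the frontier (the diversity heuristic).
        if len(frontier) <= width:
            return list(frontier)
        idx = [round(i * (len(frontier) - 1) / max(width - 1, 1)) for i in range(width)]
        return [frontier[i] for i in idx]

    level = [(c,) for c in range(1, m + 1)]            # level 1: singletons {S_1}..{S_m}
    for _ in range(k - 1):
        frontier = pareto([(*score_nf(c), c) for c in level])
        beam = [c for _, _, c in spaced(frontier, b)]  # D_b(P(C_i))
        # children append only lexically later hot spots (prefix-tree organization)
        level = [c + (nxt,) for c in beam for nxt in range(c[-1] + 1, m + 1)]
    frontier = pareto([(*score_nf(c), c) for c in level])
    return [c for _, _, c in spaced(frontier, tau)]

Under the default settings reported later (m = 20, k = 5, τ = b = 5), such a search touches only on the order of a hundred candidates per level rather than all 20C5 = 15504 subsets.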
The one-to-one correspondence between nodes in the search tree and subsets of S allows us to use them interchangeably in the pseudocode.

4.3 Discussion

FiSH's search strategy makes use of pareto efficiency and diversity directly towards identifying a small set of nodes to visit at each level of the tree. Restricting the search to only b nodes at each level before moving to the next enables efficiency. Smaller values of b enable more efficient traversal, but at the cost of risking missing out on nodes that could lead to more worthwhile members of the eventual result set. In other words, a high value of b allows a closer approximation of the τ-dpe result, but at a slower response time. It may be suggested that b be set to ≥ τ, since the algorithm can likely afford to visit more options than a human may be able to peruse eventually in the result set. The candidate set size at any point, and thus the memory requirement, is in O(bm). The computational complexity is in O(kb²m²), and is dominated by the pareto frontier identification (which is in O(b²m²)) at each level. While b is a controllable hyperparameter (likely in the range of 5-20), m can be constrained by limiting FiSH to work with the top-m result set (as S) from the upstream spatial hot spot technique.

5 EVALUATING APPROX τ-DPE RESULTS

Given that (approximate) τ-dpe is a new task we proposed, we now describe novel evaluation metrics to assess the quality of FiSH's results. Recall that, given the N-F space comprising all k-sized subsets of S, the choice of τ equally spaced skyline candidates forms the result set for the exact τ-dpe task that we propose in this paper. This result set, which we call Exact, is computationally infeasible for moderate datasets, but forms our natural baseline for measuring FiSH's effectiveness. Approximate τ-dpe results from FiSH may be evaluated either directly based on how well they approximate the expected results of the exact τ-dpe task, or based on how well they adhere to the spirit of the τ-dpe task of identifying a diverse group of pareto efficient subsets of S. We now devise evaluation measures along the lines above. In what follows, we use P to denote the mCk k-sized subsets of S.

5.1 Direct Comparison

Let the result of the exact τ-dpe task be E = [E_1, . . . , E_τ], and FiSH's result be F = [F_1, . . . , F_τ]. We would like the average distance between corresponding elements to be as low as possible.

DC(\mathcal{E}, \mathcal{F}) = \frac{1}{\tau} \sum_{i=1}^{\tau} Eucl(E_i, F_i)    (4)

where Eucl(., .) is the euclidean distance in the N-F space. Notice that when E = F, DC(., .) evaluates to 0.0. Given that N(.) and F(.) would be in different ranges, we will compute the distance after normalizing both of these to [0, 1] across the dataset. As may be obvious, smaller values, i.e., as close to 0.0 as possible, of DC(., .) are desirable.
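A minimal sketch of the DC measure, under the assumption that each result is already represented by its (N, F) coordinates and that the normalization bounds come from the full candidate space, could look as follows (illustrative names only):

import math

def dc(exact_points, fish_points, n_bounds, f_bounds):
    # Eq. (4): mean Euclidean distance between corresponding Exact and FiSH
    # candidates in the N-F space, after min-max normalizing N and F to [0, 1].
    (n_lo, n_hi), (f_lo, f_hi) = n_bounds, f_bounds

    def norm(point):
        n, f = point
        return ((n - n_lo) / ((n_hi - n_lo) or 1.0),
                (f - f_lo) / ((f_hi - f_lo) or 1.0))

    pairs = list(zip(map(norm, exact_points), map(norm, fish_points)))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)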
5.2 Quantifying Pareto-ness: Coverage

A diverse and pareto efficient set may be expected to collectively dominate most objects in the N-F space. Accordingly, we devise a measure, called coverage, that measures the fraction of candidates in P that are pareto dominated by at least one candidate in F.

Cov(\mathcal{F}) = \frac{1}{|\mathcal{P}|} \sum_{P \in \mathcal{P}} \mathbb{I}(\exists F \in \mathcal{F} \mid F \succ P)    (5)

where F ≻ P is true when F pareto dominates P. A point pareto dominates another if the latter is no better than the former on both attributes, excluding the case where both are identical in terms of their N-F co-ordinates. A candidate being dominated by another indicates that the latter characterizes an absolutely better trade-off point than the former (on both N(.) and F(.)). Thus, we would like the result set to be such that most, if not all, candidates are dominated by one or more candidates in the result set. Cov(.) is measured as a fraction of the candidates dominated, hence it is in the range [0, 1]. Full coverage (i.e., Cov(.) = 1.0) may not be attainable given that only τ candidates can be chosen in the result; instead, if we were to choose the entire skyline, we would get Cov = 1.0 by design. Thus, the extent to which Cov(F) (FiSH's coverage) approaches Cov(E) (coverage attained by the exact result) is a measure of FiSH's quality. Coverage, being modelled using pareto domination, may be seen as modelling the pareto-ness of FiSH's result.

5.3 Diversity of Results

Given that our formulation of the approximate τ-dpe task hinges on the idea that the candidates should be diverse (so that they may embody a variety of different trade-off points), diversity is a key aspect to measure the adherence of the solution to the spirit of the approximate τ-dpe task. We model diversity as the minimum among pairwise distances between candidates in F.

MD(\mathcal{F}) = \min \{ Eucl(F_i, F_j) \mid \{F_i, F_j\} \subseteq \mathcal{F}, F_i \neq F_j \}    (6)

Unlike the average of pairwise distances, which allows nearby pairs to be compensated by the existence of far away ones, this is a stricter measure of diversity. On the other hand, it is quite brittle, in the sense that just one pair of results being proximal would cause MD(.) to go down significantly; in such cases, MD(.) would not be that representative of the overall diversity in F. Hence, all the evaluation measures must be seen in cognisance of the others. Coming to desirable values of MD(.), we would like MD(F), which measures the lower bound of distances among elements in F, to be as high as possible, and approach the diversity of E, i.e., MD(E).

5.4 Discussion

As obvious from the construction, lower values of DC, and higher values on both Cov and MD, indicate the quality of FiSH's approach.
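The coverage and diversity measures in Equations (5) and (6) admit an equally small sketch; as above, candidates are assumed to be given as (N, F) points, and the function names are illustrative:

import math
from itertools import combinations

def coverage(result_points, all_points):
    # Eq. (5): fraction of all candidates in the N-F space that are pareto dominated
    # by at least one result candidate (no worse on both N and F, and not identical).
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and a != b
    dominated = sum(1 for p in all_points if any(dominates(r, p) for r in result_points))
    return dominated / len(all_points)

def min_pairwise_distance(result_points):
    # Eq. (6): the smallest pairwise Euclidean distance within the result set,
    # a deliberately strict (worst-case) notion of diversity.
    return min(math.dist(p, q) for p, q in combinations(result_points, 2))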
It is also to be seen that Cov and MD should be judged together, since it is easy to maximize coverage without being diverse and vice versa. Cov and MD require all C(m, k) subsets of S to be enumerated, whereas DC requires additionally that the exact τ-dpe results be computed. This makes these evaluations feasible only in cases where such enumeration can be done, i.e., reasonably low values of m. In addition to the above quality measures, a key performance metric that FiSH seeks to optimize for is the response time.

6 EXPERIMENTAL EVALUATION

We now describe our empirical study evaluating FiSH. In this section, we describe the dataset used, the experimental setup, our evaluation measures and our experimental results.

6.1 Dataset and Experimental Setup

6.1.1 Dataset. We used the Indian Human Development Survey (IHDS) dataset (https://ihds.umd.edu/data), a large-scale survey of India's population conducted in 2011-12. In particular, we used a random sample of 10000 individuals from the data with distinct locations. The location (lat, long) was determined through querying Google Maps based on the district and other location information available in the data. The binary hotness attribute was chosen as either (i) (annual) income < 100k (100k INR is approximately 1.35k$; India's per capita income is ≈ 2k$), or (ii) education < 2 yrs. For each setting, we use caste and religion as sensitive attributes and low income/education as the hot spot criterion. In other words, we would like to identify a set of spatial hot spots such that the population across them fare poorly on income (education) but religion and caste groups are fairly represented. These choices of attributes for hotness and fairness are abundantly informed by social realities in contemporary India; for example, caste discrimination remains rampant across India, including in urban settlements^7.

6.1.2 Experimental Setup. We used the SaTScan Bernoulli model to discover hot spots. We implemented FiSH as well as the Exact τ-dpe computation (i.e., enumerate all C(m, k) subsets, find the pareto efficient frontier, and identify τ diverse subsets) on Python 3 on an Intel 64-bit i5-8265 at 1.6 GHz with 8 GB RAM. Unless otherwise mentioned, we use the following parameter settings: m = 20, k = 5 and τ = b = 5.

6.2 Overall Comparison

We performed extensive empirical analyses over varying settings. We present representative results and analyses herein. Table 2 illustrates a representative sample of the overall trends in the comparison between FiSH and Exact.

Table 2: Comparative Results (task setting: τ = 5, k = 5, m = 20; parameter setting: b = 5 for FiSH); arrows denote whether low or high values are desirable.

Setting    Method  DC ↓    Cov ↑   MD ↑    Time(s) ↓
Income     FiSH    0.112   0.995   0.034   23.11
           Exact   N/A     0.998   0.042   6536.54
Education  FiSH    0.045   0.987   0.041   23.87
           Exact   N/A     0.997   0.081   4413.78

Table 3: Scalability Analysis: running time (in seconds) with varying m on the Education setting; Exact did not complete in reasonable time for m > 25.

m    FiSH    Exact
15   17.83   840.37
20   23.87   4413.78
25   39.46   33151.91
30   49.28   -
35   61.49   -
40   71.09   -
The low values of \ud835\udc37\ud835\udc36indicate that FiSH\u2019s results are quite close to those of Exact, which is further illustrated by the trends on the Cov measure where FiSH follows Exact closely. For MD, we observe a 20% deterioration in the case of Income, and a 50% deterioration in the case of Education. We looked at the case of Education and found that the low value of MD for FiSH was due to one pair being quite similar (distance of 0.041), possibly a chance occurrence that coincided with this setting; the second least distance was more than three times higher, at 0.1349. On an average, the pairwise distances for FiSH was only 20% less than that for Exact. Across varying parameter settings, a 15-20% deterioration of MD was observed for FiSH vis-a-vis Exact. For the record, we note that the choice of first \ud835\udc58hot spots from S as the result yielded \ud835\udc37\ud835\udc36\u22480.8 and Cov 3 to 10 percentage points lower; this confirms that \ud835\udf0f-dpe task formulation is significantly different from top-k not just analytically, but empirically too. 7https://www.economist.com/asia/2020/07/23/even-as-india-urbanises-castediscrimination-remains-rife Apart from being able to approximate the Exact results well, FiSH is also seen to be able to generate results exceptionally faster, a key point to note given that bringing the \ud835\udf0f-dpe task into the realm of computational feasibility was our main motivation in devising FiSH. In particular, FiSH\u2019s sub-minute response times compare extremely favourably against those of Exact which is seen to take more than an hour; we will illustrate later that Exact scales poorly and rapidly becomes infeasible for usage within most practical real-life scenarios. The FiSH vs. Exact trends, reported in Table 2 is representative of results across variations in parameter settings. FiSH was consistently seen to record 0-10% deteriorations in Cov, around 15-25% deterioration in MD, and multiple orders of magnitude improvements in response time. The trends on the effectiveness measures as well as the response time underline the effectiveness of the design of the FiSH method. 6.3 Scalability Analysis With FiSH being designed for efficient computation of a reasonable approximation of \ud835\udf0f-dpe results, it is critical to ensure that FiSH scales with larger \ud835\udc5a; recall that \ud835\udc5a= |S|, the size of the initial list of hotspots chosen to work upon. Table 3 illustrates the FiSH and Exact response times with varying \ud835\udc5a. While Exact failed to complete in reasonable time (we set a timeout to 12 hours) for \ud835\udc5a> 25, FiSH was seen to scale well with \ud835\udc5a, producing results many orders of magnitude faster than Exact. In particular, it was seen to finish its computation in a few minutes even for \ud835\udc5a\u2248100, which is highly promising in terms of applicability for practical scenarios. Similar trends were obtained with scalability with higher values of \ud835\udc58and \ud835\udf0f; Exact quickly becomes infeasible, whereas FiSH\u2019s response time grows gradually. 6.4 Analysis over Varying Settings We now analyze the performance of FiSH in varying settings. This analysis helps us evaluate the sensitivity of FiSH to specific parameter values; for example, smooth movements along small variations in parameter values will help build confidence in the utility of FiSH in varying scenarios. 
With Exact being unable to complete running within reasonable amounts of time for higher search spaces (e.g., m > 25, k = 7, τ > 5, etc.), we restrict our attention to FiSH trends over Cov and MD; this is so since results from Exact are necessary to compute the DC measure. Among Cov and MD, our expectation is that the brittleness of the MD measure, as noted in Section 5.3, could lead to more fluctuations in MD when compared to Cov, even when FiSH results change only gradually. We now study the trends with varying parameter settings, changing parameters one at a time, keeping all parameters at their reference settings from Section 6.1.2, except the one being varied.

6.4.1 Varying m. We now analyze the effectiveness of FiSH when operating over a larger set of SaTScan results, i.e., with larger values of m (recall m = |S|). With the number of points in the N-F space being C(m, k), increases in m lead rapidly to much denser N-F spaces, and correspondingly larger search spaces. We vary m from 15 to 30 in steps of 5; the Cov and MD trends appear in Figure 4 and Figure 7 respectively. As expected, Cov consistently remains at values higher than 0.985, whereas there is higher volatility in the case of MD. The trends indicate that FiSH is not highly sensitive to m and the quality of its results varies gradually with varying values of m.

[Figures 4-9 plot the trends referenced in this section: Figure 4: Cov vs. m; Figure 5: Cov vs. (b, τ); Figure 6: Cov vs. k; Figure 7: MD vs. m; Figure 8: MD vs. (b, τ); Figure 9: MD vs. k.]

6.4.2 Varying τ. The number of trade-off points that is provided to the user, or τ, is another important parameter in the τ-dpe task. The beam size in FiSH, as observed earlier in Section 5.4, is intimately related to τ, and may be expected to be set such that b ≥ τ. Higher values of b yield better results at the cost of slower responses; we consistently set b = τ in our result quality analysis. Higher values of τ enable choosing more points from the N-F space in the output, and this provides an opportunity to improve on Cov. However, choosing more points obviously would lead to deterioration in the MD measure that measures the minimum of pairwise distances. We vary τ (and thus b) from 3 to 7, and plot the Cov and MD trends in Figures 5 and 8 respectively, which show gentle and consistent variations. As expected, Cov is seen to improve and saturate close to the upper bound of 1.0. MD, on the other hand, is seen to deteriorate but stabilizes soon; the patterns are consistent except for the case of τ = 5 for Education, likely a chance occurrence as analyzed in Section 6.2.

6.4.3 Varying k. The third parameter of importance for the τ-dpe task is k, which denotes the number of hotspots to be chosen within each trade-off point in the result. Increasing values of k (up to m/2) lead to a larger number of points in the N-F space. With the number of trade-off points to be output pegged at τ, achieving the same coverage would become harder with increasing k. This is in contrast with MD, where there is no expectation of a consistent deterioration or improvement.
From the Cov and MD plots in Figures 6 and 9, the Cov is quite stable with a deterioration kicking in at \ud835\udc58= 7 (even there, \ud835\udc36\ud835\udc5c\ud835\udc63remains at 0.90+), whereas MD remains consistent. 6.4.4 Setting \ud835\udc4f. The beam width, \ud835\udc4fin FiSH, offers a mechanism to trade-off effectiveness for efficiency. We experimented with varying values of \ud835\udc4fand found that the gains on effectiveness measures (i.e., DC, Cov and MD) taper off beyond \ud835\udc4f> 2 \u00d7 \ud835\udf0f. The response times were seen to increase with \ud835\udc4f; there are two ways in which \ud835\udc4faffects the complexity, one is by providing more candidates at each level Figure 10: Example Results; kindly view in color. it FiSH results in green and Exact results in mustard yellow. (which increases linearly with \ud835\udc4f), and another by increasing the cost of pareto frontier identification (which is in O(\ud835\udc4f2)). From the trends which indicated a linear trend between response time and \ud835\udc4f, it may be reasonably suspected that the former factor dominates. 6.5 Example Results in the N-F Space Having analyzed FiSH quantitatively, we now consider a qualitative evaluation of FiSH vis-a-vis Exact. Fig 10 illustrates the N-F space for our reference setting (Section 6.1.2) for Income, with results from FiSH (green points) juxtaposed against Exact results (mustard yellow) and other points in red. This result is representative of FiSH\u2019s strengths and weaknesses. While three of five FiSH results are seen to be on the pareto frontier, the others are only slightly inward. As in the case of any heuristic-driven method, FiSH may miss some good results; here, FiSH\u2019s sampling misses out on the 8 \ftop-left region of the pareto frontier, which explains the slight deterioration in Cov for FiSH when compared with Exact. 7" + }, + { + "url": "http://arxiv.org/abs/2010.05353v1", + "title": "Local Connectivity in Centroid Clustering", + "abstract": "Clustering is a fundamental task in unsupervised learning, one that targets\nto group a dataset into clusters of similar objects. There has been recent\ninterest in embedding normative considerations around fairness within\nclustering formulations. In this paper, we propose 'local connectivity' as a\ncrucial factor in assessing membership desert in centroid clustering. We use\nlocal connectivity to refer to the support offered by the local neighborhood of\nan object towards supporting its membership to the cluster in question. We\nmotivate the need to consider local connectivity of objects in cluster\nassignment, and provide ways to quantify local connectivity in a given\nclustering. We then exploit concepts from density-based clustering and devise\nLOFKM, a clustering method that seeks to deepen local connectivity in\nclustering outputs, while staying within the framework of centroid clustering.\nThrough an empirical evaluation over real-world datasets, we illustrate that\nLOFKM achieves notable improvements in local connectivity at reasonable costs\nto clustering quality, illustrating the effectiveness of the method.", + "authors": "Deepak P", + "published": "2020-10-11", + "updated": "2020-10-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "INTRODUCTION Clustering [12] has been a popular task in unsupervised learning. 
Clustering involves grouping a dataset of objects into a number of groups such that objects that are highly similar to one another are more likely to find themselves assigned to the same group, and vice versa. Clustering algorithms fall into one of many families, of which partitional and hierarchical algorithms are two main streams. Partitional clustering, arguably the more popular stream, considers grouping the dataset into a number of disjoint sets. The pioneering work in this family, \ud835\udc3e-Means clustering, dates back to the 1960s [16]. \ud835\udc3e-Means clustering is a partitional clustering algorithm that additionally outputs a prototypical object to \u2019represent\u2019 each cluster, which happens to simply be the cluster centroid within the basic \ud835\udc3e-Means formulation. The centroid output is often seen Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. IDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea \u00a9 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-7503-0/20/06...$15.00 https://doi.org/10.1145/3410566.3410601 as very useful for scenarios such as for manual perusal to ascertain cluster characteristics, resulting in this paradigm of \u2019centroid clustering\u2019 [23] attracting much research interest. In alternative formulations within the centroid clustering paradigm, the prototypical object is set to be the medoid, which is a dataset object that is most centrally positioned; this is referred to as \ud835\udc3e-medoids [20] clustering or PAM1. 50+ years since \ud835\udc3e-Means, the basic \ud835\udc3e-Means formulation is still used widely and continues to inspire much clustering research [11]. The second popular family of clustering algorithms, that of hierarchical clustering, focuses on generating a hierarchy of clusters from which clusterings of differing granularities can be extracted. An early survey of hierarchical clustering methods appears at [17]. Our focus in this paper is within the task of centroid clustering. 1.1 Membership Desert in Centroid Clustering In this paper, we problematize the notion of cluster membership in centroid clustering from a conceptual and normative perspective. Our work is situated within the context of recent interest in fairness and ethics in machine learning (e.g., [15]), which focuses on embedding normative principles within data science algorithms in order to align them better with values in the modern society. In particular, we consider the question of membership desert, or what it means for an object to be deserving of being a member of a cluster, or a cluster to be deserving of containing a data object. Desert in philosophical literature2 refers to the condition of being deserving of something; a detailed exposition of philosophical debate on the topic can be found within a topical encyclopaedia from Stanford3. 
\ud835\udc3e-Means and most other formulations that build upon it have used a fairly simple notion of membership desert; that an object be assigned to the cluster to whose prototype it is most proximal, according to a task-relevant notion of similarity. While this simple notion makes intuitive sense as well as enables convenient optimization, it admits unintuitive outcomes as we will see later. There have been two recent works in re-considering membership desert in centroid clustering, both within the umbrella of research in fair machine learning. The first work [8] considers a notion of collective desert to blend in with the \ud835\udc3e-Means framework, whereby a reasonably large set of objects is considered to be deserving of their own cluster as long as they are collectively proximal to one another. The second work [19] considers the distance-to-centroid as a cost of abstraction incurred by objects in the dataset, and strives to achieve a fair distribution of the cost of abstraction across objects. We will discuss these in detail in a later section. In this work, we consider advancing a third distinct normative consideration in membership desert, that of local connectivity. At the high level, we consider the membership desert associated with an object-cluster pair as being 1https://en.wikipedia.org/wiki/K-medoids 2https://en.wikipedia.org/wiki/Desert_(philosophy) 3https://plato.stanford.edu/entries/desert/ arXiv:2010.05353v1 [cs.LG] 11 Oct 2020 \fIDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Deepak P intimately related to the extent of the object\u2019s neighbors\u2019 affinity towards the cluster in question. 1.2 Our Contributions In what may be seen as a contrast to conventional research narratives within data analytics, our work is centered on advancing a particular normative consideration as opposed to a technological challenge. This is in line with recent work on fairness and ethics in AI, which have mostly appeared within data analytics avenues as well (e.g., [1, 5, 8]). Our contribution by way of this work is three-fold: \u2022 Local Connectivity as Membership Desert: We develop an argument for considering local connectivity as a notion of membership desert in centroid clustering. Building upon this argument, we develop quantitative metrics to evaluate the extent to which local connectivity is being adhered to, within a clustering. \u2022 LOFKM: We develop a simple centroid clustering formulation, LOFKM, drawing inspiration from both centroid clustering and density-based clustering, that deepens local connectivity in clustering outputs. \u2022 Evaluation: Through an empirical evaluation over multiple realworld datasets, we illustrate that LOFKM is able to significantly improve alignment with local connectivity considerations at reasonable costs to clustering quality. Roadmap: We start by considering related work in Section 2, followed by an overview of membership desert in Section 3. This is followed by Section 4 where we describe local connectivity as a distinct notion of membership desert and ways of quantifying it for a given clustering. Section 5 outlines a simple method for enhancing local connectivity in centroid clustering, codenamed \ud835\udc3f\ud835\udc42\ud835\udc39\ud835\udc3e\ud835\udc40. This is followed by our experimental evaluation in Section 6, a brief discussion in Section 7 and conclusions in Section 8. 
2 RELATED WORK Given that our work advances a local neighborhood based normative consideration in clustering, we briefly summarize related work from (i) fair clustering, and (ii) local neighborhood estimations from the density-based clustering family. 2.1 Fair Clustering There has been an emerging interest in fair clustering. Among the two notions of fairness, individual and group fairness [6], fair clustering has largely seen explorations on the latter. Group fairness involves ensuring cluster-level representational parity of sensitive groups defined on attributes such as gender, race, ethnicity and marital status. This literature, initiated by a work on ingenious dataset pre-processing [9], has seen work on embedding fairness within the optimization [1] as well as in post-processing [5]. These also differ in the number of types of sensitive attributes that they admit. An overview of recent work on group-fair clustering appears in [1] (Ref. Table 1 therein). Research into individual fairness in clustering has a flavour of considering membership desert as the focus question; being pertinent to our work, we discuss this in detail in Section 3. 2.2 Local Neighborhood and Clustering Local neighborhood of objects has been the core consideration in work on density-based clustering, a field pioneered by the DBSCAN clustering algorithm [10], followed by OPTICS [2]. In our work, we will make use of a work that extends concepts from density-based clustering in order to identify the outlierness of dataset objects, called Local Outlier Factor (LOF) [7]. The structure of LOF relies on quantifying the local density around an object. The local density around an object is inversely related to the average reachability of the object to its \ud835\udc58nearest neighbors; with reachability being a lowerbounded version of distance between the objects. The local density around an object\u2019s neighbors is then contrasted with the object\u2019s own local density to arrive at the LOF, which is a non-negative real number. \ud835\udc3f\ud835\udc42\ud835\udc39> 1 (\ud835\udc3f\ud835\udc42\ud835\udc39< 1) is achieved by objects whose neighbors are in neighborhoods that are denser (sparser) than it\u2019s own, with \ud835\udc3f\ud835\udc42\ud835\udc39= 1 indicating a good match between respective densities. Objects with high values of \ud835\udc3f\ud835\udc42\ud835\udc39, especially \ud835\udc3f\ud835\udc42\ud835\udc39>> 1, are considered density-based outliers, due to their (relative) lack of closeby neighbors. Over the past two decades, LOF has evolved to being a very popular outlier detection method, continuously inspiring systems work on improving efficiency (e.g.,a recent fast LOF work appears in [3]), arguably adorning a place in the outlier detection literature only next to the analogous status of \ud835\udc3e-Means within clustering literature. 3 BACKGROUND: MEMBERSHIP DESERT IN CENTROID CLUSTERING Following up from Section 1.1, we now cover more background on the notion of membership desert in \ud835\udc3e-Means, and recent fairness oriented re-considerations of the notion. 3.1 Critiquing K-Means\u2019 Membership Desert Let us start with looking at the simple notion of membership desert used in \ud835\udc3e-Means, that an object deserves to be assigned to the cluster whose prototype4 it is most proximal to, proximity measured under a domain-specific notion of (dis)similarity that is deemed relevant to the clustering task. First, consider the case of two clusters, \ud835\udc34and \ud835\udc35. 
Now, let an object \ud835\udc4b1 be at a distance of 3 and 5 units from the prototypes of \ud835\udc34and \ud835\udc35respectively, as shown roughly in the first illustration in Fig 1. For another object \ud835\udc4b2, also shown in the illustration, let the distances be 8 and 6 respectively. The simple \ud835\udc3e-Means (argmin) heuristic does the following assignment: \ud835\udc4b1 \u2208\ud835\udc34 and \ud835\udc4b2 \u2208\ud835\udc35. It may be noted that while considering proximity as membership desert as in \ud835\udc3e-Means, \ud835\udc4b1 may be considered more deserving of being assigned to \ud835\udc35than \ud835\udc4b2 is to \ud835\udc35; this is so since \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b1, \ud835\udc35) < \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b2, \ud835\udc35). However, the \ud835\udc3e-Means assignment is in conflict with this observation, due to the higher degree of proximity of \ud835\udc4b1 to \ud835\udc34. Second, consider a scenario with respect to the trio, \ud835\udc4b1 in relation to \ud835\udc34and \ud835\udc35, as shown in the right-side in Figure 1. Let \ud835\udc35 be a naturally bigger and denser cluster with significant number of data objects within 6 units of distance of it. On the other hand, let \ud835\udc34be a small cluster with most of its members being within 2 units of distance around its prototype. In this setting, despite \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b1,\ud835\udc34) < \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b1, \ud835\udc35), \ud835\udc4b1 may be thought of as deserving of 4we use prototype and centroid interchangeably \fLocal Connectivity in Centroid Clustering IDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Figure 1: Two Cases for Section 3.1: Rough Illustration being located within \ud835\udc35since it is in the company of the large mass of points stretching to the proximity of \ud835\udc35. This intuitive notion of membership desert also conflicts with the cluster assignment that \ud835\udc3e-Means does. In fact, this is also an fallout of a fundamental design assumption in \ud835\udc3e-Means, that clusters be modelled as being modeled as Voronoi cells. While we do *not* argue that the \ud835\udc3e-Means choice is inferior to an alternative available choice, it may be seen that there are intuitive opportunities to critique the simple membership desert mechanism in \ud835\udc3e-Means, and that the choice of most proximal centroid is not the only natural choice. It is also noteworthy that membership assignment is not a final \ud835\udc3e-Means step, making it not entirely appropriate to consider it in isolation as we have done so far. The cluster assignment step is interleaved with the centroid learning step, leading to an interplay of effects of each other. 3.2 Fairness-orientated Notions of Membership As outlined earlier, there are two recent papers, that motivate different considerations in cluster membership assignment. 3.2.1 Proportionality [8] or Collective Desert in Cluster Membership. \ud835\udc3e-Means uses a parameter, the number of expected clusters in the output, commonly denoted as \ud835\udc3e. Thus, on an average, there are (\ud835\udc5b/\ud835\udc3e) objects in a \ud835\udc3e-Means cluster. 
Proportionality, a concept the authors propose, is the notion that if one can find a set of \u2308\ud835\udc5b/\ud835\udc3e\u2309 data objects that collectively prefer the same candidate centroid in lieu of their current assignments (which involve different centroids/clusters), they deserve a cluster of their own centered at the candidate centroid that they collectively prefer. A clustering would be regarded as violating proportionality if it involves denying this set of \u2308\ud835\udc5b/\ud835\udc3e\u2309objects their own cluster that they deserve. They develop algorithms that generate proportionally fair clusterings, those that do not violate proportionality. 3.2.2 Representativity Fairness [19]. A recent work considers humanin-the-loop analytics pipelines where each cluster centroid is perused in order to arrive at a single decision for all objects in the cluster. Within such pipelines and even more generally, objects that are far away from their assigned cluster centroids suffer a higher \u2019representativity cost\u2019 from the cluster-level abstraction of the dataset provided by the clustering. RFKM, the proposed method, seeks to level off this object-level cost across the objects in the dataset, and move towards what is called representativity fairness. Operationally, it considers re-engineering the \ud835\udc3e-Means steps in a way that chances of proximity violations such as those in the first example in Section 3.1 are reduced. 4 LOCAL CONNECTIVITY AND MEMBERSHIP DESERT 4.1 Motivation We first consider local connectivity as a concept and its relevance to membership desert in centroid clustering. Consider three motivating scenarios in Fig. 2. In each of these figures, the middle point is the designated cluster prototype for the blue cluster; in other words, we have zoomed in on the blue cluster prototype and excluded other points in the dataset (including those from blue or other clusters) from view. The other blue colored points are assigned to be part of the blue cluster, and the red colored points in Fig. 2(a) are part of a different (red) cluster. In each of these figures, we would like to consider the status of the black colored object, and how well it deserves to be part of the blue cluster, and thus to being \u2019represented\u2019 by the blue cluster\u2019s prototype in the cluster-level abstraction. Fig 2(a) has the corresponding black object being closest to the blue cluster prototype among all three scenarios. However, its local neighborhood (think of it as perhaps the closest few data objects to itself) is largely from the red cluster. Intuitively, this makes it reasonable to argue that despite the proximity, the black object in Fig 2(a) is limited in how well it deserves to be part of the blue cluster; in other words, its membership desert to the blue cluster comes under question. Now, consider the scenario in Fig 2(b). The black object, while not as proximal as in the case of Fig. 2(a), is quite well connected to the blue cluster given that it has an \u2019pull\u2019 from its local neighborhood towards the blue cluster. This makes it more deserving of membership to the blue cluster. Lastly, consider Fig 2(c) where the black object is tucked into a corner within a sparse region of the space. It has a reasonable claim to membership in the blue cluster, due to its nearest neighbors being blue (despite them being quite far from itself); however, the strength of the claim is dented by its distance to the blue cluster prototype. 
In summary, we observe the following: \u2022 Fig 2(a): Despite proximity, the membership desert of the black object to the blue cluster is limited due to the local neighborhood being red. \u2022 Fig 2(b): The black object is most deserving to be part of the blue cluster due to high local connectivity within the blue cluster and reasonable proximity to the blue cluster prototype. \fIDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Deepak P Figure 2: Local Connectivity: Motivating Scenarios (best viewed in color) \u2022 Fig 2(c): The black object may be considered as reasonably deserving of blue cluster membership, even though its distance from the blue cluster prototype reduces the strength of the claim. In other words, these illustrative scenarios offer different tradeoffs between the pull towards the blue cluster prototype offered by local connectivity and proximity. These, we hope, illustrates that local neighborhood connectivity to the cluster in question is a fairly crucial factor in assessing membership desert. Though we have used abstract examples to motivate local connectivity, this has real-world implications wherever clustering is used for consequential tasks; for a simple example, consider centroid clustering being used for facility location to determine locations of service facilities (e.g., post offices or hospitals) with people represented using their geographic co-ordinates. In facility location, assigning a person to a facility (located at a centroid) towards which she has few local neighbors may be seen as unjust as well as a decision that undermines social solidarity. While \ud835\udc3e-Means is evidently not directly accommodative of local connectivity considerations due to using proximity in cluster assignment, the family of density based clustering algorithms pioneered by DBSCAN [10, 22] makes local neighborhood a prime consideration in forming clusters. However, the density-based clustering family does not offer a convenient prototype for each cluster, and is thus limited in its applicability to human-in-the-loop pipelines such as those outlined in [19]. In particular, density-based clusterings could yield non-convex clusters, where the centroid computed over cluster objects could be situated outside the natural boundaries of the cluster. Our method, as we will see, will leverage concepts from local neighborhood assessments from the density-based clustering family, and use that within the framework of centroid clustering inspired by \ud835\udc3e-Means. 4.2 Quantifying Local Connectivity Local connectivity in cluster membership desert, as illustrated in the previous section, can be thought of as: how well the local neighborhood of the data object supports its membership to the cluster in question. We now consider quantifying local connectivity at the object level, which will be aggregated to the level of different clusters in order to arrive at a measure of how well local connectivity is adhered to, in a given clustering. This quantification would form an evaluation metric for assessing local connectivity in clustering. Consider an object whose cluster-specific local neighborhood is conceptualized as the set of its \ud835\udc61nearest neighbors (we use \ud835\udc61 instead of the conventional \ud835\udc58to avoid conflict with the \ud835\udc3ein \ud835\udc3eMeans) within the cluster in question. 
We would like the \ud835\udc61nearest neighbors to comprise objects that: \u2022 Offer a Cluster Pull: We would like the neighbors to offer a pull in the direction towards the cluster prototype. While pull is admittedly an informal word, we believe it is fairly straightforward to interpret the meaning. To illustrate this notion, observe that the local neighborhood in Fig 2(a) was largely red objects which may be seen as pulling the object towards the red cluster. This is in sharp contrast with the local neighborhood pull towards the blue cluster in Fig 2(b). \u2022 Are Proximal to the Object: Even if the \ud835\udc61nearest neighbors are towards the cluster prototype and can be seen as offering a pull, such a pull is meaningless unless the neighbors are proximal to the object in question. For example, consider Fig 2(c) where the neighbors of the black object are all towards the blue cluster. However, the appeal of this pull is dented by the fact that the neighbors are quite distant from the black object. We now quantify the above desired characteristics in the form of a quantitative measure, for a given clustering. Let \ud835\udc4bbe the data object in question, and \ud835\udc36be the cluster prototype to whom the local connectivity strength is to be estimated. The dataset of objects involved in the clustering is denoted as X. Given our interest in quantifying the pull towards the cluster prototype, we first identify the set of \ud835\udc61nearest neighbors of \ud835\udc4bthat are both: (i) members of the cluster in question i.e., \ud835\udc36, and (ii) lie in between \ud835\udc4band the cluster prototype for \ud835\udc36. This set is denoted as \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b): \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b) = arg min \ud835\udc46\u2286\ud835\udc36\u2227\ud835\udc46\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc60\ud835\udc53\ud835\udc56\ud835\udc52\ud835\udc60(\ud835\udc46,\ud835\udc4b,\ud835\udc36)\u2227|\ud835\udc46|=\ud835\udc61 \u2211\ufe01 \ud835\udc60\u2208\ud835\udc46 \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc60,\ud835\udc4b) (1) where: \ud835\udc46\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc60\ud835\udc53\ud835\udc56\ud835\udc52\ud835\udc60(\ud835\udc46,\ud835\udc4b,\ud835\udc36) = \u00db \ud835\udc60\u2208\ud835\udc46 (\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc60,\ud835\udc36) \u2264\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc36)) \u2227(\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc60) < \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc36)) (2) \ud835\udc46\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc60\ud835\udc53\ud835\udc56\ud835\udc52\ud835\udc60(., ., .) enforces the condition that objects in \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b) fall in between \ud835\udc36and \ud835\udc4bthrough a distance check; the first distance condition checks whether each element \ud835\udc60is closer to \ud835\udc36, and the second checks whether it is on the \u2019same side\u2019 of \ud835\udc36as \ud835\udc4bis. Among objects that satisfy these conditions, \ud835\udc61of them that are most proximal to \ud835\udc4bare chosen to form the set \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b). 
It may be noted that in cases where there are not enough objects that satisfy the eligibility \fLocal Connectivity in Centroid Clustering IDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Figure 3: Quantifying Local Conenctivity Illustration condition, |\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b)| may be less than \ud835\udc61. This is likely to happen when \ud835\udc36is very close to \ud835\udc4b; we will outline its implications later. Our interest is now in assessing how well objects in \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b) adhere to the pull and proximity heuristics outlined above. We use a simple geometric intuition in order to quantify these. Consider Figure 3 where \ud835\udc4bis the black object and \ud835\udc36is the big blue encircled object, as before. The small blue object is \ud835\udc60\u2208\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b). Consider the line joining \ud835\udc4band \ud835\udc36and \ud835\udc60shown as being projected on to the line. The pull heuristic would prefer the dotted line indicating the projection of \ud835\udc60to the line to be as short as possible since that would direct the pull offered by \ud835\udc60to be aligned towards \ud835\udc36. The proximity heuristic, on the other hand, would prefer \ud835\udc60to be as close as possible to\ud835\udc4b, thus preferring that both the dotted lines be as short as possible. We would additionally like the local connectivity to be comparable across different data objects in X. Thus, we measure the two distances indirectly in relation to the distance between \ud835\udc4b and \ud835\udc36, as two measures, Deviation (Dev) and Normalized Distance (ND), as follows: \ud835\udc37\ud835\udc52\ud835\udc63(\ud835\udc4b,\ud835\udc36,\ud835\udc60) = \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc36,\ud835\udc60) + \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc60,\ud835\udc4b) \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc36) \u22121.0 (3) \ud835\udc41\ud835\udc37(\ud835\udc4b,\ud835\udc36,\ud835\udc60) = \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc60) \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc36,\ud835\udc60) + \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc60,\ud835\udc4b) (4) \ud835\udc37\ud835\udc52\ud835\udc63(\ud835\udc4b,\ud835\udc36,\ud835\udc60) would evaluate to 0.0 when \ud835\udc60falls directly on the line connecting \ud835\udc4band \ud835\udc36, since that would ensure that \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc36,\ud835\udc60) + \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc60,\ud835\udc4b) = \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc36). \ud835\udc37\ud835\udc52\ud835\udc63(\ud835\udc4b,\ud835\udc36,\ud835\udc60) increases the more \ud835\udc60deviates from that line, leading to its name. \ud835\udc41\ud835\udc37(\ud835\udc4b,\ud835\udc36,\ud835\udc60) on the other hand, measures the distance between \ud835\udc4band \ud835\udc60as a fraction of the distance between \ud835\udc4band \ud835\udc36through \ud835\udc60. Thus, \ud835\udc41\ud835\udc37(., ., .), unlike \ud835\udc37\ud835\udc52\ud835\udc63(., ., .) is directly related to the length of both dotted lines in Fig 3. 
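A small sketch of these two building blocks, under the straightforward reading of Eqs. 3 and 4 with objects and prototypes as Euclidean vectors, is given below; the function names are illustrative.

```python
import numpy as np

def _dist(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def deviation(X, C, s):
    """Dev (Eq. 3): how far the neighbour s strays from the straight line joining
    X and the prototype C, relative to dist(X, C); 0.0 if s lies on the line."""
    return (_dist(C, s) + _dist(s, X)) / _dist(X, C) - 1.0

def norm_dist(X, C, s):
    """ND (Eq. 4): distance from X to s as a fraction of the X-to-C distance routed through s."""
    return _dist(X, s) / (_dist(C, s) + _dist(s, X))

# A neighbour lying on the X-C line: Dev = 0.0, ND = fraction of the way towards C.
print(deviation((0, 0), (4, 0), (1, 0)), norm_dist((0, 0), (4, 0), (1, 0)))  # 0.0 0.25
```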
Since we would like both of these measures to be numerically small (\u22480), we would like to minimize the product of these, which we call as the local connectivity disagreement measure: \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc4b,\ud835\udc36,\ud835\udc60) = \ud835\udc37\ud835\udc52\ud835\udc63(\ud835\udc4b,\ud835\udc36,\ud835\udc60) \u00d7 \ud835\udc41\ud835\udc37(\ud835\udc4b,\ud835\udc36,\ud835\udc60) (5) Higher values of \ud835\udc3f\ud835\udc36\ud835\udc37() denote lower levels of local connectivity offered by \ud835\udc60to support the membership desert for the pair \ud835\udc4b,\ud835\udc36. This disagreement may be aggregated across all objects in \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b) to arrive at an object level estimate: \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc4b,\ud835\udc36) = \u2211\ufe01 \ud835\udc60\u2208\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b) \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc4b,\ud835\udc36,\ud835\udc60) (6) When |\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b)| < \ud835\udc61, the \ud835\udc3f\ud835\udc36\ud835\udc37would be correspondingly lower since there are fewer objects to sum over. Since we expect |\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b)| < \ud835\udc61to happen when \ud835\udc4bis already very close to \ud835\udc36, this translates to an alternative route to reduce \ud835\udc3f\ud835\udc36\ud835\udc37for such objects; in addition to improving local connectivity by way of neighbors\u2019 positions, \ud835\udc3f\ud835\udc36\ud835\udc37 can also be improved (i.e., numerically reduced) through enhanced proximity between objects and their cluster prototypes, which would lead to smaller |\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b)|. Among objects in the cluster \ud835\udc36, some may have high LCDs and some may have lower values for LCD. Towards assessing a cluster, consider using the average of the LCDs across all objects as an aggregate measure. This would enable a small set of objects with very shallow local connectivity (i.e., high LCD scores since LCD measures disagreement) to be ignored due to being compensated by a large number of low LCD scores across other objects in the cluster. This may be considered undesirable in the face of the high importance accorded to the concern for the most disadvantaged, such as in the very popular stream of Rawlsian notions of fairness [13]. Motivated by such considerations, we accord the cluster with an LCD value computed as the highest LCD (i.e., lowest local connectivity, since LCD measures disagreement) among its objects: \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc36) = max \ud835\udc4b\u2208\ud835\udc36\ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc4b,\ud835\udc36) (7) A clustering of a dataset would produce multiple clusters, since a clustering defines a partitioning. In order to arrive at a dataset-level measure of connectivity offered by a clustering, we would need an aggregate statistic at the dataset level. As in the case above, we would like to ensure that no cluster suffers from bad local connectivity, making the highest LCD among clusters a natural measure to minimize. We call this MaxLCD. Additionally, We would also like to minimize LCD across all clusters, making AvgLCD a very pertinent measure. 
\ud835\udc40\ud835\udc4e\ud835\udc65\ud835\udc3f\ud835\udc36\ud835\udc37(C) = max \ud835\udc36\u2208C \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc36) (8) \ud835\udc34\ud835\udc63\ud835\udc54\ud835\udc3f\ud835\udc36\ud835\udc37(C) = 1 |C| \u2211\ufe01 \ud835\udc36\u2208C \ud835\udc3f\ud835\udc36\ud835\udc37(\ud835\udc36) (9) These are analogous to the construction of Max Wasserstein and Avg Wasserstein used in evaluation of fair clustering [1]. Thus, MaxLCD and AvgLCD offer quantifications of disagreement with local connectivity across the dataset, as manifested in the clustering C. A good clustering would be one which, in addition to performing well on traditional clustering evaluation metrics such as purity and silhoutte, achieves low values of MaxLCD and AvgLCD (thus, high local connectivity). 4.3 Drawbacks of LCD Measures While LCD measures are, we believe, a starting point for quantifying local connectivity, these are not free of shortcomings. We outline a few drawbacks, which could potentially point to ways of refining them to yield better metrics of local connectivity. First, both \ud835\udc37\ud835\udc52\ud835\udc63(.) and \ud835\udc41\ud835\udc37(.), which form the building blocks of LCD measures, rely on distances expressed as fractions of other distances. This makes them unable to be sensitive to variations in absolute distances. Consider the case of \ud835\udc37\ud835\udc52\ud835\udc63(.); when \ud835\udc4band \ud835\udc36are close to each other, even slight deviations of \ud835\udc60from the straight line connecting them are amplified, with \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc36) forming the denominator. Similarly, take the case of \ud835\udc41\ud835\udc37(.); high values of \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc36,\ud835\udc60) push it towards 0.0 by providing a very high denominator. When \ud835\udc4b \fIDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Deepak P and \ud835\udc60are very far from \ud835\udc36, even high values of \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b,\ud835\udc60) could cause \ud835\udc41\ud835\udc37(.) \u22480. Such cases make \ud835\udc3f\ud835\udc36\ud835\udc37less meaningful to quantify the connectivity of fringe objects that are far from cluster prototypes. Any attempts at addressing such absolute distance issues should also care to retain the comparability of the resultant metrics across objects in the dataset. Second, we have excluded neighbors of \ud835\udc4b that do not belong to the same cluster as \ud835\udc36, from consideration in \ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b). This means that an object\u2019s neighbors\u2019 pull towards the assigned cluster is evaluated without regard to whether it has similar or stronger pulls towards other clusters. This, we believe, is a minor issue, since such stronger pulls towards a different cluster also would likely reduce cluster coherence in general. This means that any clustering that attempts to improve coherence of clusters in addition to local connectivity (such as our method, \ud835\udc3f\ud835\udc42\ud835\udc39\ud835\udc3e\ud835\udc40, introduced later) would address this implicitly to some extent using the cluster coherence criterion. The above two sets of drawbacks are not meant to be comprehensive but to serve to provide a flavour of the possibilities of improving upon LCD measures, and the challenges in those directions. 
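Before moving on to the method, the following sketch strings together the quantities defined in Section 4.2, from the eligible neighbour set of Eqs. 1-2 through to MaxLCD and AvgLCD (Eqs. 8-9). It assumes a clustering given as per-cluster numpy arrays with matching prototypes; all names are illustrative.

```python
import numpy as np

def _dist(a, b):
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def lcd_object(X, C, members, t):
    """LCD(X, C) (Eq. 6): sum of Dev x ND (Eq. 5) over the (up to) t nearest
    same-cluster neighbours of X that lie 'between' X and the prototype C (Eqs. 1-2)."""
    dXC = _dist(X, C)
    eligible = [s for s in members
                if _dist(s, C) <= dXC and _dist(X, s) < dXC]   # Eq. 2
    eligible.sort(key=lambda s: _dist(s, X))                   # t most proximal (Eq. 1)
    total = 0.0
    for s in eligible[:t]:
        dev = (_dist(C, s) + _dist(s, X)) / dXC - 1.0          # Eq. 3
        nd = _dist(X, s) / (_dist(C, s) + _dist(s, X))         # Eq. 4
        total += dev * nd                                      # Eq. 5
    return total

def lcd_measures(clusters, prototypes, t):
    """Returns (MaxLCD, AvgLCD) (Eqs. 8-9); `clusters` is a list of 2-D arrays of
    member objects and `prototypes` the matching list of cluster prototypes."""
    per_cluster = []
    for members, C in zip(clusters, prototypes):
        per_cluster.append(max(                                # Eq. 7
            lcd_object(X, C, [m for m in members if not np.array_equal(m, X)], t)
            for X in members))
    return max(per_cluster), float(np.mean(per_cluster))
```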
5 LOFKM: ENHANCING LOCAL CONNECTIVITY IN CLUSTERING We have argued and motivated that local connectivity is a crucial factor in considering membership desert for an object to a cluster. Local neighborhood statistics has been extensively used in the stream of work on density-based clustering, initiated through the popular DBSCAN clustering method [10]. Density-based clustering has the ability to identify clusters that have non-convex shapes (e.g., can disambiguate star and crescent5 as separate clusters) and overlapping convex shapes (e.g., can identify rings arranged concentrically as separate clusters). However, this ability comes at a cost; density-based clustering inherently lacks the possibility of choosing a meaningful representative prototype for a cluster (e.g., in the above cases, observe that the centroid would lie outside the cluster itself and would be meaningless as a prototype). Our method, LOFKM, is the result of an attempt to bring a density-based flavour within \ud835\udc3e-Means framework, in order to improve local conncectivity considerations. Our design considerations are as follows: \u2022 Conceptual Simplicity: We would like to retain the conceptual simplicity inherent in \ud835\udc3e-Means which has likely been at the core of it\u2019s widespread popularity. Additionally, we would like to bring in density-based concepts within it in a lucid manner. \u2022 Computational Convenience: The task of clustering is a datasetlevel optimization problem which has inherent complexities. This makes directly using local connectivity measures (e.g., LCD) within the optimization infeasible. Due to solving a computational task, computational convenience is also a significant consideration. 5.1 Towards a Method As we have seen, local connectivity involves a relation between an object and a cluster prototype in the backdrop of the local neighborhood of the object in the \u2019direction\u2019 of the cluster prototype. It 5https://en.wikipedia.org/wiki/Star_and_crescent is important to note that the local neighborhood of an object is a property of its location within the similarity space provided by the pre-specified [\ud835\udc51\ud835\udc4e\ud835\udc61\ud835\udc4e\ud835\udc60\ud835\udc52\ud835\udc61,\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc5b\ud835\udc50\ud835\udc52\ud835\udc53\ud835\udc62\ud835\udc5b\ud835\udc50\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b] pair, and is in no way \u2019alterable\u2019 to nudge clustering towards deepening local connectivity (or any other consideration, for that matter). High \ud835\udc3f\ud835\udc42\ud835\udc39(Ref. Sec 2.2) objects are more likely to suffer from shallow local connectivity since their neighborhood is sparse; so the neighbors are unlikely to support their membership to any cluster by much. One way to enhance local connectivity would be through better inlineness, which would be to set cluster prototypes in such directions from high LOF objects within which they have many neighbors. This, however, would require a significantly different prototype construction, putting the conceptual simplicity of \ud835\udc3e-Means prototype estimation at risk. Yet another way would be to bring the cluster prototype towards such high \ud835\udc3f\ud835\udc42\ud835\udc39objects, which would enhance their connectivity through both support from neighborhood as well as lower |\ud835\udc41\ud835\udc36 \ud835\udc61(\ud835\udc4b)|. 
This route is amenable to exploration while staying within the framework of the \ud835\udc3e-Means clustering formulation, and forms the basis of our LOFKM method. However, it risks bringing down the compactness of the cluster, which is a factor that would have repercussions on other metrics such as cluster purity and silhoutte as well. As obvious, deepening a particular normative consideration in any machine learning task is expected to introduce constraints that would reduce the clustering quality overall; in other words, higher local connectivity is not expected to come \u2019for free\u2019. A good clustering under the local connectivity lens would be one that can deepen local connectivity with limited impact on other metrics of clustering quality; this, we will see, is the focus of our empirical evaluation. 5.2 LOFKM: The Method In line with the idea of bringing cluster prototypes closer to higher LOF data objects, we start with assigning a weight to each data object, as follows: \ud835\udc4a(\ud835\udc4b) = ( 1.0 \ud835\udc3f\ud835\udc42\ud835\udc39(\ud835\udc4b) \u22641 \ud835\udc3f\ud835\udc42\ud835\udc39(\ud835\udc4b) \ud835\udc5c\ud835\udc61\u210e\ud835\udc52\ud835\udc5f\ud835\udc64\ud835\udc56\ud835\udc60\ud835\udc52 (10) \ud835\udc4a(\ud835\udc4b) is simply the LOF score bounded under by 1.0. This weight is then used in re-formulating the standard \ud835\udc3e-Means objective as follows, for a given clustering C over the dataset: \u2211\ufe01 \ud835\udc36\u2208C \u2211\ufe01 \ud835\udc4b\u2208\ud835\udc36 \ud835\udc4a(\ud835\udc4b) \u00d7 \u0012 \u2211\ufe01 \ud835\udc34\u2208A (\ud835\udc4b.\ud835\udc34\u2212\ud835\udc36.\ud835\udc34)2 \u0013 (11) where \ud835\udc34is any attribute from the set of attributes A, with \ud835\udc4b.\ud835\udc34 and \ud835\udc36.\ud835\udc34denoting the value taken for the attribute by the object \ud835\udc4b and the cluster prototype of cluster \ud835\udc36respectively (notice that we have overloaded \ud835\udc36to denote both the cluster and its prototype for notational simplicity). Intuitively, this is equivalent to considering the dataset as comprising each object as being replicated as many times as its LOF score requires, and applying standard \ud835\udc3e-Means over the enlarged dataset. There are two sets of variables that we can change in order to optimize for the objective; the cluster memberships and cluster prototypes. Standard \ud835\udc3e-Means optimizes these in turn (keeping one set fixed, and optimizing for the other) over many iterations until the cluster memberships stabilize. \fLocal Connectivity in Centroid Clustering IDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Name # Instances # Attributes # Classes Yeast 1484 8 10 Wireless6 2000 7 4 Avila 20867 10 12 Table 1: Dataset Statistics Under the objective in Eq 11, the membership assignment step, given the cluster prototypes, is as follows: \u2200\ud835\udc4b\u2208X, \ud835\udc36\ud835\udc59\ud835\udc62\ud835\udc60\ud835\udc61\ud835\udc52\ud835\udc5f(\ud835\udc4b) = arg min \ud835\udc36\u2208C \u2211\ufe01 \ud835\udc34\u2208A (\ud835\udc4b.\ud835\udc34\u2212\ud835\udc36.\ud835\udc34)2 (12) Since we are updating each object independently given the current estimate of cluster prototypes, \ud835\udc4a(\ud835\udc4b) does not factor into this cluster assignment step since it is simply a constant factor for each \ud835\udc4bindependent of which cluster \ud835\udc4bgets assigned to. This, as one may notice, is exactly the cluster assignment step in \ud835\udc3e-Means. 
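A minimal sketch of the weight computation (Eq. 10) and the assignment step (Eq. 12) is shown below, obtaining LOF scores via scikit-learn's LocalOutlierFactor; the wiring is illustrative rather than the authors' implementation, and the weights come into play only in the prototype update described next.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def lof_weights(X, t):
    """Eq. 10: per-object weight W(X) = LOF score, bounded below by 1.0."""
    lof = LocalOutlierFactor(n_neighbors=t).fit(X)
    scores = -lof.negative_outlier_factor_   # sklearn stores negated LOF values
    return np.maximum(scores, 1.0)

def assign_clusters(X, prototypes):
    """Eq. 12: assign each object to its nearest prototype; the per-object weight
    W(X) is constant across candidate clusters and hence drops out of this step."""
    X = np.asarray(X, float)
    P = np.asarray(prototypes, float)
    dists = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return dists.argmin(axis=1)
```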
It may sound odd as to why we critique the \ud835\udc3e-Means membership desert and still use it in LOFKM; the crucial factor here is that this proximity-based membership desert is used against a set of cluster prototypes that are estimated in very sharp contrast to the analogous step in \ud835\udc3e-Means. The LOFKM cluster prototype estimation step is as follows: \u2200\ud835\udc34\u2208A, \ud835\udc36.\ud835\udc34= \u00cd \ud835\udc4b\u2208\ud835\udc36\ud835\udc4a(\ud835\udc4b) \u00d7 \ud835\udc4b.\ud835\udc34 \u00cd \ud835\udc4b\u2208\ud835\udc36\ud835\udc4a(\ud835\udc4b) (13) In other words, each \ud835\udc4bis accounted for as many times as warranted by \ud835\udc4a(\ud835\udc4b). Towards generating a clustering from a dataset, much like in \ud835\udc3e-Means clustering, we start with a random initialization of cluster prototypes followed by iteratively applying Eq 12 and Eq 13 until the cluster memberships become relatively stationary across iterations. Owing to these steps mirroring those of standard \ud835\udc3e-Means, we do not outline a full pseudocode for LOFKM herewith. 5.2.1 Note on Complexity. The \ud835\udc3e-Means steps, much like the usual \ud835\udc3e-Means algorithm, is linear in the number of objects, number of clusters and number of attributes. However, computing the weights, i.e., Eq 10, is more expensive. While LOF computation is generally regarded as between superlinear and quadratic in the number of objects [7], faster methods have recently been proposed [3, 14]. It is notable that any further advancements in improving LOF computations readily transfer over to LOFKM as well, given that the LOF and \ud835\udc3e-Means steps are decoupled within LOFKM. 6 EXPERIMENTAL EVALUATION We now describe our empirical evaluation. We start by outlining the datasets and baselines in our empirical evaluation, while also outlining the evaluation setup. This is followed by detailed results from empirical evaluation and analyses. 6.1 Datasets, Baselines and Evaluation Setup 6.1.1 Datasets. We evaluate our methods on multiple real-world datasets from the UCI Machine Learning Repository. These have widely different numbers of objects, ranging from 1.5\ud835\udc58to 21\ud835\udc58, and spread across 4 \u221212 classes. The dataset statistics are summarized in Table 1. 6.1.2 Baseline. Much like the only two existing papers that propose new normative considerations in clustering, that of proportionality [8] and representativity [19], we use the classical \ud835\udc3e-Means formulation as the baseline method in our experimental evaluation. We do not include either of the above methods in our comparison since they optimize for significantly different notions of membership desert; as an example, it may be seen that the method from [8] was used in the empirical evaluation for representativity in [19], and it was observed (unsurprisingly) that the basic \ud835\udc3e-Means fared much better than [8] on representativity. 6.1.3 Evaluation Setup. We follow the evaluation framework for fair clustering (as in [1, 19]), with the evaluation being conducted across two kinds of metrics; (i) local connectivity (analogous to fairness metrics in fair clustering) metrics, viz., AvgLCD and MaxLCD, and (ii) clustering quality metrics, viz., silhoutte [21] (Sil) and clustering purity7 (Pur). For LOFKM, we expect improvements on the former, and setbacks on the latter. LOFKM may be judged to be effective if it is able to achieve good gains on the former set of metrics, at reasonable detriment to the latter. 
For both LOFKM and \ud835\udc3e-Means, we average the performance metrics across 100 random starts, so as to achieve stable and reliable numbers. We always set the number of clusters in the output, i.e., the parameter \ud835\udc3e, to be equal to the number of classes in the respective datasets (Ref. Table 1). 6.2 Experimental Results and Analysis We first outline the structure of the experimental analysis. Local connectivity, as outlined in Sec 4.2, is assessed using a parameter \ud835\udc61, the number of relevant neighbors for an object; this parameter is used in the computation of both MaxLCD and AvgLCD. For LOFKM, there is a similar parameter in the input, which is the number of neighbors for an object used in LOF computation (Ref. Sec. 2.2). These, being similar in spirit, are set to identical numbers, and we denote both as \ud835\udc61. We experiment with varying values of \ud835\udc61; in the interest of brevity, we report results for \ud835\udc61\u2208{3, 4, 5} as a representative set of results since the trends held good for higher values. \ud835\udc3e\ud835\udc40, short for \ud835\udc3e-Means, does not use any neighborhood parameter in the method. The evaluation on fairness metrics is illustrated in Table 2 whereas the evaluation on clustering quality appears in Table 3. The percentage change of the LOFKM metric over that in KM is indicated explicitly, for ease of interpretation. An average of 5\u221210% gains are achieved on the AvgLCD measure, indicating a sizeable improvement in local connectivity in the clusterings output by LOFKM over those of KM. Further, the improvements are seen to improve with the size of the dataset, which is expected since larger datasets allow for more flexibility in clustering assignments. The corresponding improvements in MaxLCD are seen to be smaller. MaxLCD quantifies the worst local connectivity across clusters, and thus relates to the quantification over a single cluster, which in turn is the worst local connectivity across members of the cluster. While it would intuitively be expected that least locally connected objects which 7https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html \fIDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea Deepak P Dataset Method AvgLCD \u2193 MaxLCD \u2193 t = 3 t = 4 t = 5 t = 3 t = 4 t = 5 Yeast KM 0.93 1.18 1.42 1.20 1.57 2.00 LOFKM 0.92 1.08 1.15 1.15 1.53 1.98 Improvement % 01.07% 08.47% 09.01% 04.17% 02.55% 01.00% Wireless KM 1.24 1.68 2.04 1.32 1.79 2.24 LOFKM 1.18 1.56 1.90 1.31 1.73 1.95 Improvement % 04.83% 07.14% 06.87% 00.76% 03.35% 12.95% Avila KM 1.11 1.48 1.83 1.33 1.77 2.19 LOFKM 0.99 1.31 1.61 1.32 1.80 2.19 Improvement % 10.81% 11.49% 12.02% 00.75% -01.69% 00.00% Avg of Improvement % 05.57% 09.03% 09.30% 01.89% 01.40% 04.65% Table 2: Evaluation on Local Connectivity Measures. Note that lower values are better for both AvgLCD and MaxLCD, as indicated using the arrow in the column heading. Dataset Method Sil \u2191 Pur \u2191 t = 3 t = 4 t = 5 t = 3 t = 4 t = 5 Yeast KM 0.26 0.42 LOFKM 0.27 0.27 0.26 0.41 0.41 0.41 Change % +03.84% +03.84% 00.00% -02.40% -02.40% -02.40% Wireless KM 0.40 0.93 LOFKM 0.39 0.39 0.39 0.77 0.78 0.78 Change % -02.50% -02.50% -02.50% -17.20% -16.13% -16.13% Avila KM 0.15 0.46 LOFKM 0.18 0.18 0.18 0.45 0.45 0.45 Change % 20.00% 20.00% 20.00% -02.17% -02.17% -02.17% Avg of Change % 07.11% 07.11% 05.83% -07.26% -06.90% -06.90% Table 3: Evaluation on Clustering Quality Measures. 
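The clustering-quality side of this protocol can be scripted with standard tooling. The sketch below computes silhouette and purity for one clustering and averages them over repeated random starts; it assumes the lofkm function sketched earlier, and leaves the local-connectivity metrics (AvgLCD, MaxLCD, defined in Sec 4.2) as plug-in functions since their formulas are not restated here.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def purity(labels, classes):
    """Clustering purity: fraction of objects falling in the majority class of their cluster."""
    total = 0
    for c in np.unique(labels):
        _, counts = np.unique(classes[labels == c], return_counts=True)
        total += counts.max()
    return total / len(labels)

def evaluate_over_restarts(X, W, classes, k, n_restarts=100):
    """Average Sil and Pur over random starts, mirroring the 100-restart protocol."""
    sils, purs = [], []
    for seed in range(n_restarts):
        labels, _ = lofkm(X, W, k, seed=seed)   # lofkm as sketched above
        sils.append(silhouette_score(X, labels))
        purs.append(purity(labels, classes))
    return np.mean(sils), np.mean(purs)
```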
Note that higher values are better for both Sil and Pur, as indicated using the arrow in the column heading. would be in sparse regions where local connectivity improvements would be harder to achieve, it is promising to note that LOFKM consistently achieves improvements on MaxLCD over Yeast and Wireless; the corresponding improvements in Avila are limited, and negative in one case. The trends on the clustering quality metrics in Table 3 may be regarded as quite interesting. It may be noted that \ud835\udc61does not play a role for results of KM since the clustering quality metrics as well as \ud835\udc3e\ud835\udc40are agnostic to \ud835\udc61. As outlined earlier, we expect that the cost of local connectivity enhancement in LOFKM would manifest as a deterioration in clustering quality. While we can observe such deterioration in Pur in Table 3, LOFKM is surprisingly able to achieve improvements in Sil on the Yeast and Avila datasets. On careful investigation, we found evidence to hypothesize that LOFKM discovers secondary clustering structures, which differ from the primary ones that are better correlated with external labels (Pur, as one might remember, measures correlation with external labels). These secondary clustering structures, while not necessarily tighter, are found to be well separated, yielding improvements in Sil. This interestingly correlates with similar observations over Sil in representativity fairness (Ref. Sec. 6.3.2 in [19]). In contrast to Yeast and Avila, Wireless does not seem to exhibit such well-separated secondary structures, leading to falls in both Sil and Pur. Across datasets, the deterioration in Pur is seen to be fairly limited, to within 10%; we would re-iterate that the fact that such deterioration comes with an improvement in Sil indicates the promisingness of LOFKM. To summarize, LOFKM is seen to offer consistent and often sizeable improvements in local connectivity, with mixed trends in clustering quality. 7 DISCUSSION Having considered local connectivity as a factor for membership desert in clustering, it is useful to think about how this relates to other notions and other factors that may be argued to play a role in membership desert. Local connectivity is distinctly different from representativity [19] in that an object that is very distant from the cluster prototype could still be locally connected to the very same cluster. While this conceptual distinction cannot be more apparent, in practice, we expect peripheral/fringe objects of a cluster to suffer from local connectivity, and similar could be true for representativity as well. In a way, local connectivity provides a way to distinguish between objects in the periphery of clusters that are locally connected to the cluster and those that are not. This points to the possibility of using both in tandem. Peripheral objects \u2019deserve\u2019 better representativity, but \fLocal Connectivity in Centroid Clustering IDEAS 2020, August 12\u201314, 2020, Seoul, Republic of Korea local connectivity could provide a way to prioritize among them. The connection with proportionality [8] is somewhat more nuanced, since proportionality violations are evaluated at the collection level. That said, proportionality violations may be expected to be in the gulf between existing clusters, since those would be the locations where one would expect to see preference to the existing cluster assignment waning. 
Thus, addressing proportionality violations by changing cluster assignments may be seen as automatically addressing local connectivity, since the objects would be better locally connected to the new cluster. These relationships between concepts could lead to interesting future explorations. Membership desert having been considered along lines of proximity [19], collective vote [8] and local connectivity, it is interesting to think of whether there are other ways of thinking about cluster memberships. The building blocks of Silhoutte [21] provide an interesting angle to the issue. Silhoutte quantifies the average distance to the objects of it\u2019s existing cluster, and those to the objects of the next nearest cluster, and uses these to compute a normalized difference, called the object-specific silhoutte co-efficient. The silhoutte score is then the mean8 of these. It may be argued that each object needs to be accorded a minimum level of higher proximity to the existing cluster than the next best, or that objects need to score similarly on their respective silhoutte co-efficients. This line of exploration requires low variance of the silhoutte co-efficients over the dataset, as well as maximizing the minimum silhoutte co-efficient. Another perspective is to consider the role of sensitive attributes such as race, sex, gender and religion, when clustering person-level data. Each of the notions of membership desert could be extended using the role of sensitive attributes. For example, there could be two routes to enhance membership desert based on the relationship with the cluster prototype. One could be through proximity, and another could be through similarity in sensitive attribute profiles, and these could compensate slightly for each other. This discussion hopefully serves to indicate that there is plentiful meaningful room for enhancing the diversity of membership desert notions in clustering formulation. A recent position paper [18] considers certain other normative possibilities within the task of clustering. 8" + }, + { + "url": "http://arxiv.org/abs/2010.07054v1", + "title": "Representativity Fairness in Clustering", + "abstract": "Incorporating fairness constructs into machine learning algorithms is a topic\nof much societal importance and recent interest. Clustering, a fundamental task\nin unsupervised learning that manifests across a number of web data scenarios,\nhas also been subject of attention within fair ML research. In this paper, we\ndevelop a novel notion of fairness in clustering, called representativity\nfairness. Representativity fairness is motivated by the need to alleviate\ndisparity across objects' proximity to their assigned cluster representatives,\nto aid fairer decision making. We illustrate the importance of representativity\nfairness in real-world decision making scenarios involving clustering and\nprovide ways of quantifying objects' representativity and fairness over it. We\ndevelop a new clustering formulation, RFKM, that targets to optimize for\nrepresentativity fairness along with clustering quality. Inspired by the\n$K$-Means framework, RFKM incorporates novel loss terms to formulate an\nobjective function. The RFKM objective and optimization approach guides it\ntowards clustering configurations that yield higher representativity fairness.\nThrough an empirical evaluation over a variety of public datasets, we establish\nthe effectiveness of our method. 
We illustrate that we are able to\nsignificantly improve representativity fairness at only marginal impact to\nclustering quality.", + "authors": "Deepak P, Savitha Sam Abraham", + "published": "2020-10-11", + "updated": "2020-10-11", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY", + "cs.AI", + "cs.LG" + ], + "main_content": "INTRODUCTION Clustering is a classical unsupervised learning task [12] that seeks to organize a dataset of objects in groups, such that objects that fall within the same group are more similar to each other than those that belong to different groups. Among the most popular formulations for clustering, inspired by the classical \ud835\udc3e-Means algorithm [17], is that of centroid clustering. Such algorithms, in addition to grouping data objects into clusters, offer a representative prototype for each cluster. Within the classical \ud835\udc3e-Means formulation, the representative prototype for a cluster is simply the centroid of objects within the cluster. In other similar formulations such as \ud835\udc3e-medoids [20], the representative could be the medoid, which is the object within a Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. WebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom \u00a9 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-7989-2/20/07...$15.00 https://doi.org/10.1145/3394231.3397910 cluster whose average dissimilarity to all the objects in the cluster is minimal. \ud835\udc3e-Medoids may be more appropriate when the usage scenario is better suited towards looking at an actual object rather than a hypothetical object that is located at the centroid. In both these cases as well as several others, the target is to produce clusters along with a representative for each cluster that is how centrally located within the cluster. Consider a pragmatic way of using clustering within a real-world scenario as follows. For an oversubscribed job vacancy where it is infeasible to scan through each job application manually, clustering offers an easy way out. The employer would cluster these job applications into a moderate number of clusters, followed by looking at each cluster representative, make a decision on suitability (e.g., reject, shortlist or scrutinize further), and apply that decision to all objects within the respective cluster. In retail, a similar sequence, clustering of customers followed by manual appreciation of the cluster representatives, would aid identifying suitable customer clusters to offer a sales promotion to. Given that advanced data collection methods severely outpace any efforts on manually labelling each object, clustering benefits a plethora of modern scenarios involving large datasets. In fact, it is also very useful for simpler scenarios such as facility location. 
For example, a large retail company venturing into a new city could cluster potential customer data using their geo-locations and open branches at each cluster centroid to ensure proximity to potential customers. Across these myriad scenarios, the cluster representative would be consistently used to inform a decision that would be applied to all objects within the cluster. Looking back at the job screening scenario, consider a cluster that involves a large and diverse set of job applications. Large clusters are inevitable when the desired number of output clusters are much fewer than the total number of applications, as would often be the case (given the need to speed-up the job screening process). For a large cluster, some applications would inevitably be very close to the cluster representative, whereas other applications would be much further away. A judgement or decision made through inspecting an object is intuitively likely to be more suited to objects that are very similar to it than to objects that are not as similar. Given our setting where judgements and decisions are based on perusing the cluster representative, the judgement arrived at is likely to be better suited for applications very similar to the cluster representative, and are likely to be much less suitable for those that are much further away. Despite this disparity in suitability with varying similarities to the cluster representative, the same decision is applied to all job applications given the structure of the process. This opens up a frontier of potential unfairness in the process, which we call as representativity (un)fairness, since some applications are accorded judgements that are more suited than for others. It may be noted that this is directly influenced by the arXiv:2010.07054v1 [cs.CY] 11 Oct 2020 \fWebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Deepak P and Savitha Sam Abraham usage of the clustering technique than in the manual aspects of the process, and is thus best addressed within the clustering itself. 1.1 Clustering on the Web We have used job screening as a scenario to motivate representativity fairness primarily since this scenario has been the subject of much fairness oriented legislation1. However, the notion of fairness in general, as well as representativity fairness, are pertinent in any scenario involving exploratory data analysis using clustering methods. Web has emerged, over the past decades, as a rich source of (unlabelled) data. Thus, the web likely presents the largest set of scenarios involving exploratory analyses of data. Each user on the web leaves different cross-sections of digital footprints in different services she uses, together encompassing virtually every realm of activity. These data sources are so huge that no manual labelling efforts can keep pace with their growth. These service providers have an interest to perform exploratory analytics via clustering (clustering of mobility trajectories [25], for example) over consumer data for scenarios such as (i) understanding customer behavior to inform long-term corporate strategies, (ii) deliver personalized promotions and do better customer targeting for new services/products, and (iii) nudge individual users towards behavioral patterns that better suit their interests. 
In these scenarios, users who end up at the fringes of their assigned cluster, and thus are accorded low representativity, stand to be disadvantaged when decisions are made for them on the basis of their (distant and thus significantly dissimilar) cluster representative. These could induce a spectrum of scenarios, such as being offered irrelevant personalized offers or unsuitable sales promotions, to more consequential ones such as being offered an unfavorable car insurance quote. In the public sector, this could lead to more serious consequences. For example, users who end up on the perimeter of a cluster whose representative is considered typical of \u2019suspicious behavior\u2019 may be shortlisted for needless additional surveilance and/or pro-active checks. In short, it is easy to see how the massive amounts of data collected over the web infrastructure forms a fertile ground for clustering and exploratory analytics tasks, and why representativity could become a serious concern in scenarios within them. 1.2 Our Contribution In this work, we develop a novel measure of fairness for the representative based clustering paradigm used across the \ud835\udc3e-Means inspired family of algorithms. In sharp contrast to a recent such work [6] that considers unfairness at the level of object groups (they call those as coalitions), our new notion of fairness, that of representativity fairness, is based on (an aggregate of) individual object-level assessments. We demonstrate how representativity fairness can be facilitated within the clustering process itself through simple illustrative examples, and outline a number of evaluation measures to quantify representativity fairness of a clustering. We propose a clustering formulation and method to improve representativity fairness within clustering outputs. Through an empirical evaluation over multiple datasets, we illustrate the empirical effectiveness 1Examples include Title VII of the Civil Rights Act of 1964 (US), Uniform Guidelines on Employee Selection Procedures 1978 (US) of our approach in generating clusters with significant improvements in representativity fairness, while only suffering marginal degradations in clustering quality over existing methods. 2 RELATED WORK We now briefly summarize a selection of some recent literature on; (1) fairness in machine learning, and (2) fair clustering algorithms. 2.1 Fairness in Machine Learning Fairness in machine learning has seen tremendous amounts of research interest over the past several years. The vast majority of fair ML work has focused on supervised learning, especially on classification [11, 26]. Among tasks outside the traditional realm of supervised learning, fairness has been explored in retrieval [27], clustering [7] and recommendation scenarios [19]. Fairness in machine learning may be conceptualized using a number of different and mutually conflicting normative considerations [15]. Two streams of notions, as introduced in [10], are those of individual fairness and group fairness. Individual fairness is focused on consistent treatment and strives to achieve configurations where similar objects are assigned similar outcomes. Group fairness, on the other hand, looks to ensure that outcomes be equitably distributed across data subgroups defined on sensitive attributes such as gender, race, ethnicity, nationality and religion. In other words, individual fairness may be regarded as focusing on the process whereas group fairness is evaluated on the outcome. 
As Sen alludes to in [22], these relate to the ideas of niti and nyaya respectively in classical Indian jurisprudence. Though individual and group fairness have been traditionally treated as distinct and conflicting within work on fair machine learning, this dichotomy has been subject to some recent criticism (refer [5]). 2.2 Fair Clustering Most work on fair clustering has focused on group fairness. Fair clustering algorithms endeavor to ensure some form of representational parity across groups defined on sensitive attributes in the output clusters. Sensitive attributes could include gender, ethnicity, nationality, religion or even age and relationship status when dealing with people datasets; broadly, any attribute on which fairness is sought to be achieved could be treated as a sensitive attribute. As a concrete example, when considering a single protected attribute, say gender, the group-fair clustering task is to ensure that each cluster has a gender ratio that is either identical to, or close enough to, the gender ratio across the whole dataset (or alternatively, a pre-specified ideal gender ratio). If such representational parity is achieved, downstream applications making use of the outputs of the clustering are likely to treat the groups defined on the protected attribute fairly. Techniques differ on whether group fairness ensuring mechanisms are modelled at the pre-processing stage (e.g., [7]), optimization model (e.g., [1]) or as a post-processing step (e.g., [4]). Another facet of classifying fair clustering techniques is based on the flexibility to accommodate different numbers and types of sensitive attributes. While some techniques are designed to accommodate a single binary sensitive attribute [7, 18], some others can accommodate a single multi-valued attribute [2, 16]. There have also been techniques that can accommodate multiple \fRepresentativity Fairness in Clustering WebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom sensitive attributes simultaneously [1, 4]. A third facet of viewing fair clustering literature is whether the objective is to achieve theoretical fairness bounds [7], or achieve empirical fairness improvements [1, 28]. Fairness in clustering outside the framework of fairness over protected groups, such as the task we address in this paper, has been subject to limited exploration. In fact, to our best knowledge, there has been only one prior work in this direction. This recent work [6] proposes a notion of proportionality fairness in clustering. Using the intuitive assumption that individuals prefer to be closer to their cluster representative (which we will also use in our formulation), the authors of [6] define a new concept of proportional clusterings. Under their definition, a clustering solution may be regarded as proportional if there does not exist any set of at least \u0006\ud835\udc5b \ud835\udc58 \u0007 data points (\ud835\udc5b is the number of data objects in the dataset, and \ud835\udc58is the number of clusters) each of which would prefer the same particular data point to be their cluster representative, in preference to their currently assigned (separate) cluster representatives. This notion is extended to multiples of \u0006\ud835\udc5b \ud835\udc58 \u0007 points as well. 
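For concreteness, the proportionality condition of [6] can be checked by brute force: a clustering is in violation if some candidate point would be strictly preferred (i.e., is strictly closer) by at least ⌈n/k⌉ objects over their currently assigned representatives. The sketch below reflects our reading of that definition and is illustrative only; it is quadratic in the number of objects.

```python
import numpy as np

def proportionality_violations(X, labels, reps, k):
    """Candidate points preferred by at least ceil(n/k) objects over their
    currently assigned cluster representatives (our reading of [6])."""
    n = len(X)
    threshold = int(np.ceil(n / k))
    assigned_d = np.sum((X - reps[labels]) ** 2, axis=1)
    violators = []
    for j in range(n):                      # each data point as a candidate centre
        d_to_candidate = np.sum((X - X[j]) ** 2, axis=1)
        if np.sum(d_to_candidate < assigned_d) >= threshold:
            violators.append(j)
    return violators
```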
The proportionality notion disallows any group of \u0006\ud835\udc5b \ud835\udc58 \u0007 proximal points to be split across multiple clusters even if it benefits the dataset-wide optimization; in a way, this is so since they are considered entitled to their own cluster. The authors illustrate that proportional clustering solutions may not always exist, propose a notion of approximate proportionality, and provide algorithms that can achieve theoretical guarantees of approximate proportionality in the output clusterings. While proportionality is built upon the same basic intuition of the assumed preference of data objects to be proximal to their cluster representative, it significantly differs from our notion of representativity fairness. Being closest to our task in spirit, we use this method as a baseline in our experiments. 3 REPRESENTATIVITY FAIRNESS We now outline the novel notion of fairness that we consider in this paper, that of representativity fairness. We discuss quantifying representativity and fairness over it, and outline representativity fairness enhancement by means of illustrative examples. 3.1 Quantifying Representativity Representativity of a data object within a specified clustering is the extent to which the data object is represented by the cluster representative corresponding to the cluster to which it is assigned. The clustering process makes use of a similarity measure between objects as a fundamental building block towards building clusters and cluster representatives. Thus, as a natural fallout, we also use similarity metrics to quantify representativity. Accordingly, the extent to which a data object is represented by it\u2019s cluster representative is simply the similarity of the object to the cluster representative. In other words, it is inversely related to the dissimilarity of the object to the cluster representative. The dissimilarity of an object to it\u2019s assigned cluster representative may be seen as the cost incurred by the object due to the cluster-level abstraction provided by the clustering. 3.2 Quantifying Representativity Fairness Our notion of representativity fairness is rooted on the concept of egalitarianism, and seeks to achieve egalitarianism on representativity. Thus, we would prefer clusterings where objects fare equally well on representativity. In other words, an ideal configuration for representativity fairness would be the case where all objects are equidistant from their respective cluster representatives. This enforces that all objects should live on the surface of equal-sized hyperspheres centered on their respective cluster representatives. This is evidently an infeasible scenario for many datasets since there may not exist \ud835\udc58cluster representativies where all data objects live on the surface of the equal-radius hyperspheres centered on them. Thus, we need to be able to quantify clusterings based on the extent to which they adhere to the notion of representativity fairness. Consider a dataset X = {. . . ,\ud835\udc4b, . . .} and a clustering C = {. . . ,\ud835\udc36, . . .} where \ud835\udc36represents a cluster. Let \ud835\udc45(\ud835\udc36) represent the representative of cluster \ud835\udc36, and C(\ud835\udc4b) denote the cluster to which \ud835\udc4bbelongs under the clustering C. 
Thus, the representativity of objects in X under the clustering C is given by the set/distribution: R(X, C) = { \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b))) | \ud835\udc4b\u2208X } (1) Our intent, given our target of egalitarianism, is to ensure that the values within R(X, C) are as even as possible. A natural first way to quantify this is by means of the variance of the distribution: \ud835\udc49\ud835\udc4e\ud835\udc5f(R(X, C)) = 1 |X| \u2211\ufe01 \ud835\udc4b\u2208X \u0012 R[\ud835\udc4b] \u2212\ud835\udc4e\ud835\udc63\ud835\udc54{R[\ud835\udc4b])|\ud835\udc4b\u2208X} \u00132 (2) where R[\ud835\udc4b] is a shorthand for \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b))). The more representativity fair a clustering is, the lower the value of\ud835\udc49\ud835\udc4e\ud835\udc5f(R(X, C)). Resource allocation in distributed systems has a similar structure as representativity \u2019allocation\u2019 in clustering, and a fairness notion that was developed for the latter [13] is intuitively appealing and appropriate for our setting. The measure, often referred to as the Jain measure from the name of the first author, offers a score in the range (0, 1] with higher values indicating higher fairness: \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b(R(X, C)) = \u0012 \u00cd \ud835\udc4b\u2208X R[\ud835\udc4b] \u00132 |X| \u00d7 \u00cd \ud835\udc4b\u2208X(R[\ud835\udc4b])2 (3) For a perfectly uniform distribution (say, {2, 2, 2}, across three objects), the numerator and denominator both evaluate to the same value (in this case, 36), yielding a \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bof 1.0. Any deviations from perfect uniformity with the same sum/budget do not matter to the numerator (since it is a function of the sum), but increase the denominator value, thus causing \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bto drop from 1.0 to lower values, approaching 0.0 for highly asymmetric distributions over large |X| settings. It may be noted that variance or \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bdo not capture the absolute values of R[\ud835\udc4b]s, but simply the uniformity. Thus, there could be cases where a low variance is achieved within a configuration where the cluster representative is very far away from all cluster members. In view of preventing such undesirable cases, we would additionally want to consider the average of the R(X, C) as an evaluation measure. Note that \ud835\udc34\ud835\udc63\ud835\udc54(which is essentially the normalized sum) is the objective that many clustering algorithms directly or indirectly try to optimize for. Turning our attention back \fWebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Deepak P and Savitha Sam Abraham Figure 1: Single Cluster Example (best viewed in color) Object Distance to Distance to Black (2.5,2) Grey (3,2) (2,1) 1.12 1.41 (1,2) 1.50 2.00 (2,3) 1.12 1.41 (5,2) 2.50 2.00 \ud835\udc34\ud835\udc63\ud835\udc54 1.56 1.71 \ud835\udc40\ud835\udc4e\ud835\udc65 2.50 2.00 \ud835\udc49\ud835\udc4e\ud835\udc5f 0.43 0.11 \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b 0.88 0.97 Table 1: Analysis of Figure 1 to variance, quantifying representativity fairness using variance incentivizes moving towards what is often understood as strict egalitarianism2 on representativity which penalizes deviations on both directions from the mean equally. 
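These quantities translate directly into code. The sketch below computes the representativity distribution of Eq. 1 and the Var and Jain measures of Eqs. 2 and 3 (Avg is simply the mean of the same vector); it is an illustrative rendering under our own function names, not the authors' evaluation script.

```python
import numpy as np

def representativity(X, labels, reps):
    """Eq. 1: Euclidean distance of each object to its assigned cluster representative."""
    return np.linalg.norm(X - reps[labels], axis=1)

def var_measure(R):
    """Eq. 2: variance of the representativity distribution (lower is fairer)."""
    return np.mean((R - R.mean()) ** 2)

def jain_measure(R):
    """Eq. 3: Jain fairness index, in (0, 1], higher is fairer."""
    return R.sum() ** 2 / (len(R) * np.sum(R ** 2))
```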
Thus, a clustering that penalizes a small minority of points\u2019 representativity for higher representativity for a large majority could still fare reasonably well on variance, \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5band average. Theories of justice have, over the decades, developed notions that prefer some deviations from strict egalitarianism over others. One example is a philosophy called luck egalitarianism [3] which argues that inequalities be justified as long as they benefit people who are victims of bad luck. A simpler and high-level philosophy put forward in a classical work by Rawls [14] that has come to be known as the difference principle suggests that inequalities be arranged to the greatest benefit of the least advantaged. Reflections of this Rawlsian position are also found in Gandhian thought and the Indian constitution [23]. Inspired indirectly by these, we consider the representativity of the object accorded least representativity (i.e., highest distance from cluster representative) as another complementary measure to evaluate representativity fairness: \ud835\udc40\ud835\udc4e\ud835\udc65(R(X, C)) = \ud835\udc5a\ud835\udc4e\ud835\udc65{ R[\ud835\udc4b] | \ud835\udc4b\u2208X } (4) The lower the values of each of\ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc40\ud835\udc4e\ud835\udc65(while keeping \ud835\udc34\ud835\udc63\ud835\udc54 low as well) and higher the value of \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b, the more representativity fair the clustering would be. As noted earlier, a typical fairnessagnostic clustering algorithm such as classical \ud835\udc3e-Means would be expected to naturally optimize for \ud835\udc34\ud835\udc63\ud835\udc54; thus, a fairness-conscious algorithm (such as the one we develop in this paper) would be expected to trade-off \ud835\udc34\ud835\udc63\ud835\udc54while seeking to achieve lower values on \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc40\ud835\udc4e\ud835\udc65and correspondingly higher values on \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b. 2https://plato.stanford.edu/entries/justice-distributive/#Strict Figure 2: Two Clusters Example (best viewed in color) 3.3 Representativity Fairness: Examples We now motivate, by means of examples, as to how representativity fairness could be facilitated by varying (i) construction of cluster representatives, or (ii) cluster memberships of data objects. The method that we develop in a later section targets to enhance representativity fairness by affecting both kinds of variations. 3.3.1 Varying Cluster Representatives. Consider a single cluster comprising the blue objects/points in Fig 1, with the data point co-ordinates themselves listed in Table 1. The centroid of the blue points, which also corresponds to the best estimate to minimize \ud835\udc34\ud835\udc63\ud835\udc54for the cluster, is represented by the black point (at (2.5, 2) as outlined in Table 1). The R[\ud835\udc4b] values when considering the black point as the cluster representative is indicated in the second column along with corresponding \ud835\udc34\ud835\udc63\ud835\udc54, \ud835\udc40\ud835\udc4e\ud835\udc65, \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bvalues. While being the centroid of all blue points, it may be noted that the black point offers much lower representativity to the blue point on the far right, given the configuration of the other blue points. 
Now, consider an alternative cluster representative for the same set of blue points, indicated by the grey point (at (3, 2)). The R[\ud835\udc4b] values as well as \ud835\udc34\ud835\udc63\ud835\udc54, \ud835\udc40\ud835\udc4e\ud835\udc65,\ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bare indicated in the third column in the table. It is easy to note that changing the cluster representative from the black point to the grey point improves the representativity of the blue point in the far right, by enhancing the proximity of the cluster representative to itself. This is reflected in the analysis in Table 1 that offers a comparative perspective between the two settings for cluster representatives, with the better value in each row indicated in boldface. The choice of the grey point as a cluster representative is seen to offer better values of \ud835\udc40\ud835\udc4e\ud835\udc65, \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b with improvements of 20%, 74% and 10% respectively as compared to the choice of the black point, while trailing the latter on the \ud835\udc34\ud835\udc63\ud835\udc54 measure by 10%. The grey point also enables achieving a very high \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bvalue, very close to the upper bound of 1.0. This illustrates that representativity fairness may be enhanced by deviating from the \ud835\udc3e-Means paradigm of a centrally located cluster representative. 3.3.2 Varying Cluster Memberships. We now use another example to illustrate how representativity fairness can be facilitated by varying cluster memberships. Towards illustrating that this is independent of Section 3.3.1, we will stick with the \ud835\udc3e-Means paradigm of centroid as cluster representative for this example. Figure 2 shows a dataset clustered in two different ways, with cluster membership indicated using color coding; all points, blue and red, are data points, with cluster centroids represented using a black ring. In \fRepresentativity Fairness in Clustering WebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Left Right R[(1, 1)] 1.06 0.47 R[(1, 2)] 0.79 0.74 R[(2, 1)] 0.79 0.74 R[(3, 3)] 1.77 1.41 R[(5, 5)] 0.00 1.41 \ud835\udc34\ud835\udc63\ud835\udc54 0.88 0.96 \ud835\udc40\ud835\udc4e\ud835\udc65 1.76 1.41 \ud835\udc49\ud835\udc4e\ud835\udc5f 0.40 0.19 \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b 0.71 0.86 Table 2: Analysis of Figure 2 the clustering on the left, all but one data point is part of the blue cluster, and the lone element on the top right is in a red cluster (since the red cluster centroid would overlap with that data point, it is not shown to avoid confusion). The figure on the right has a different configuration for the two clusters, one where the top right point is joined in the red cluster by another point (with the red cluster\u2019s centroid at the midpoint between the two points). An analysis, similar to earlier, follows on Table 2. While the left configuration, a potential stable \ud835\udc3e-Means output, scores better than the right configuration on the \ud835\udc34\ud835\udc63\ud835\udc54measure by around 9%, the right configuration comfortably beats the left on the \ud835\udc40\ud835\udc4e\ud835\udc65(by 20%), \ud835\udc49\ud835\udc4e\ud835\udc5f (by 53%) and \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b(21%) measures, and may rightly be regarded as being more adherent to representativity fairness. 
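As a quick usage example, the single-cluster setting of Section 3.3.1 can be replayed numerically with the representativity, var_measure and jain_measure helpers sketched above, treating the four blue points as one cluster and swapping the representative; the ordering between the two representatives on each measure matches the comparison discussed for Table 1 (grey wins on Max, Var and Jain at a small cost in Avg).

```python
import numpy as np

# Blue points from Figure 1, treated as a single cluster
X = np.array([[2., 1.], [1., 2.], [2., 3.], [5., 2.]])
labels = np.zeros(len(X), dtype=int)

for name, rep in [("black (2.5, 2)", [2.5, 2.0]), ("grey (3, 2)", [3.0, 2.0])]:
    R = representativity(X, labels, np.array([rep]))
    print(name,
          "Avg", round(R.mean(), 2), "Max", round(R.max(), 2),
          "Var", round(var_measure(R), 2), "Jain", round(jain_measure(R), 2))
```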
In fact, the right configuration also offers better representativity for 4 out of 5 points in the dataset. This example illustrates that representativity fairness can be facilitated by changes in cluster memberships, in addition to changing the cluster representative as seen earlier. 3.4 Positioning Representativity Fairness We now analyze representativity fairness within the context of the two streams of fairness, viz., individual and group fairness, as outlined in Section 2. Our definition of representativity fairness, due to not using the notion of protected groups, may be considered as being unrelated to group fairness. However, it does deviate from the general notion of individual fairness notably. As an example, consider the object (3, 3) in Section 3.3.2. It\u2019s proximity to each of {(1, 1), (1, 2), (2, 1)} is equal to or better than it\u2019s proximity to (5, 5). Yet, assigning it to the same cluster as (5, 5), in sharp dsiagreement to the \u2018similar objects be assigned similar outcomes (i.e., cluster memberships)\u2019 tenet of individual fairness, yields better representativity fairness to the overall clustering. In a way, representativity fairness incentivizes outcomes that are better for those objects that are disadvantaged in terms of their neighborhood density. We believe that neighborhood density disadvantage would also be correlated with minority/non-mainstream groups, and thus, we expect representativity fairness to be imbibing a flavor of group fairness in practice despite not using groups within the definition. The connection between proportionality [6] and representativity fairness is more nuanced. Proportionality is focused towards being fair to those objects who collectively prefer a new cluster representative; thus, violations of proportionality occur more on the fringes of clusters much like where objects disadvantaged on representativity also appear. However, there are sharp contrasting elements between the two notions due to being motivated by different scenarios. Within the framework of representativity fairness, individual cluster members can cause deterioration in representativity fairness based on how far they are positioned from the cluster representative. On the other hand, no proportionality violations are regarded as taken place until a strong enough collective voice (note that the cardinality of the set is an important consideration in proportionality) emerges from across the fringes of multiple neighboring clusters; thus, proportionality does not necessarily prevent an individual object from having very low representativity. Thus, representativity fairness accounts for quasi-outliers whereas proportionality does not bear that flavour. 4 PROBLEM DEFINITION As outlined in Section 3.2, let X = {. . . ,\ud835\udc4b, . . .} be a dataset of objects which are defined over a set of attributes A = {. . . ,\ud835\udc34, . . .} that are deemed to be pertinent for the clustering task. Much like in the \ud835\udc3e-Means family of methods, we model the distance between any two objects defined over A as: \ud835\udc51(\ud835\udc4b,\ud835\udc4c) = \u2211\ufe01 \ud835\udc34\u2208A \u0000\ud835\udc4b[\ud835\udc34] \u2212\ud835\udc4c[\ud835\udc34]\u00012 (5) where \ud835\udc4b[\ud835\udc34] is the value that object \ud835\udc4btakes on attribute \ud835\udc34. \ud835\udc51(., .) is the square of the euclidean distance, and thus, is monotonically related to the euclidean distance, which we denote as \ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61(., .) 
as in Section 3.2. The task of clustering is that of partitioning X into a prespecified number of clusters or groups, denoted as C = {. . . ,\ud835\udc36, . . .}; as in convention in literature, we use \ud835\udc58to denote the pre-specified number of clusters to be formed in the output. The clustering C is expected to be comprehensive over X, with each object \ud835\udc4bbeing assigned a unique cluster, denoted by C(\ud835\udc4b). Additionally, we expect each cluster to be associated with a cluster representative, modeled as an object over the same space of attributes A. As outlined in Section 3.2, we use \ud835\udc45(\ud835\udc36) to denote the representative for the cluster \ud835\udc36\u2208C. For ease of reference, we use the term clustering to refer to both the grouping provided by C as well as the set of cluster representatives associated with the clusters in C. The conventional task of clustering targets to achieve a grouping that maximizes intra-cluster similarity and minimizes inter-cluster similarity, similarity being inversely related to the distance as defined above. The task of representativity fair clustering, on the other hand, intends to obtain a clustering that yields better values on \ud835\udc49\ud835\udc4e\ud835\udc5f(Eq. 2), \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b(Eq. 3) and \ud835\udc40\ud835\udc4e\ud835\udc65(Eq. 4) measures as evaluated on the representativity vector R(X, C) (Eq. 1). While gains on this is expected to be achieved at the cost of deterioration in the \ud835\udc34\ud835\udc63\ud835\udc54, keeping such deterioration small would be considered better. 5 RFKM: OUR METHOD We now describe our proposed method for representativity fairness in clustering, which we call RFKM to stand for both Rrepresentativity Fairness and K-Means, the latter being the method that it draws inspiration from. RFKM incorporates two novel loss terms that collectively incentivize reducing variability among representativity of objects within the output clustering as well as towards ensuring attention towards objects that are accorded the least representativity. The RFKM objective function is as follows: \fWebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Deepak P and Savitha Sam Abraham O = \u2211\ufe01 \ud835\udc4b\u2208X \ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b))) + \ud835\udf061 \u00d7 \u2211\ufe01 \ud835\udc4b\u2208X \u0000\ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))\u00012 + \ud835\udf062 \u00d7 \ud835\udc5a\ud835\udc4e\ud835\udc65{\ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))|\ud835\udc4b\u2208X} (6) This objective function has three components. The first term is the usual \ud835\udc3e-Means objective that computes the loss based on the sum of each data object\u2019s dissimilarity/distance to its cluster representative; given that higher distances offer lower representativity, we refer to these loss terms as object-level representativity loss. The second term is modelled as the sum of squares of the object-specific representativity losses. The squaring operation amplifies the larger losses more than smaller ones, and thus optimizing for the squared versions would help nudge the clustering towards those that minimize the larger losses. The third term is simply the representativity loss corresponding to the object that is farthest from its cluster representative. 
The overall objective is modelled as a weighted sum, with \ud835\udf061 and \ud835\udf062 being the weights for the second and third terms that may be set appropriately. As in \ud835\udc3e-Means, the overall loss is computed over a given clustering; the RFKM task is thus to identify a clustering that minimizes this overall loss. 5.1 Intuitive Basis of the Loss Terms The first loss term, being the classical \ud835\udc3e-Means loss, is more aligned with lowering \ud835\udc34\ud835\udc63\ud835\udc54and targets to lower the sum of the representativity losses. The second term deepens the attention within the optimization formulation towards those objects that have high representativity losses. This may be seen as offering a push towards lower \ud835\udc49\ud835\udc4e\ud835\udc5ffrom above downward; there is no corresponding term to push from below since we would ideally like to achieve low \ud835\udc49\ud835\udc4e\ud835\udc5fat low overall representativity losses as well, and the constraints from the geometry of the data offer a natural upward push. These would entail better distributional parity targeted at yielding higher \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b scores. The third term measures the representativity loss associated with the object that is placed farthest from its cluster representative, in the clustering. This is directly targeted towards optimizing for \ud835\udc40\ud835\udc4e\ud835\udc65. However, the second term also helps reducing \ud835\udc40\ud835\udc4e\ud835\udc65(and albeit less so, the first term too) since the farthest object also forms the largest term within the first and second summations. 5.2 The Optimization Approach Clustering is a dataset-wide optimization problem, and the \ud835\udc3e-Means formulation yields an NP-hard problem [24]. It follows that the RFKM objective is also NP-hard. Thus, much like the case of \ud835\udc3eMeans, we adopt an iterative optimization approach that, while susceptible to local minima, optimizes for the objective gradually across iterations. Notice that there are two sets of variables; (i) the cluster memberships, and (i) the cluster representatives. We adopt the high-level \ud835\udc3e-Means framework of optimizing for each of these in turn (which, as one may notice, correspond to the \ud835\udc38and \ud835\udc40steps of the classical expectation-maximization meta-algorithm [8]). One difficulty with the RFKM objective is the construction of the third term; we replace it with a differentiable approximation: \ud835\udc5a\ud835\udc4e\ud835\udc65{\ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))|\ud835\udc4b\u2208X} \u22481 \ud835\udf19\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc52 \u0012 \u2211\ufe01 \ud835\udc4b\u2208X \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))) \u0013 (7) where \ud835\udf19is a sufficiently large positive number (say, 10). This approximation is easy to understand if one notices that the multiplication with \ud835\udf19and the exponentiation enlarge the largest representativity loss much more than the others (for larger \ud835\udf19, the larger the amplification). Thus, the sum of such enlarged losses are dominated by the largest loss, which is then recovered through the \ud835\udc59\ud835\udc5c\ud835\udc54(.) and division by \ud835\udf19. 
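The behaviour of the approximation in Eq. 7 is easy to verify numerically. The snippet below contrasts the smooth estimate with the true maximum for a few values of φ; the loss values used are arbitrary illustration values, not taken from the paper.

```python
import numpy as np

def smooth_max(losses, phi):
    """Eq. 7: (1/phi) * log(sum(exp(phi * loss))), a differentiable estimate of the max."""
    return np.log(np.sum(np.exp(phi * losses))) / phi

losses = np.array([0.2, 0.5, 1.0, 3.0])     # illustrative representativity losses
for phi in (1, 3, 10):
    print(phi, round(smooth_max(losses, phi), 3), "vs true max", losses.max())
# Larger phi tightens the estimate towards the true maximum (3.0);
# phi close to 1 over-estimates it.
```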
For smaller values of \ud835\udf19(say, close to 1), this approximation would exceed the max, as it may be intuitive to see; this construction allows for some control to amplify the importance of the largest representativity loss further. Thus, the overall objective may be re-written as: O = \u2211\ufe01 \ud835\udc4b\u2208X \ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b))) + \ud835\udf061 \u00d7 \u2211\ufe01 \ud835\udc4b\u2208X \u0000\ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))\u00012 + \ud835\udf062 \ud835\udf19\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc52 \u0012 \u2211\ufe01 \ud835\udc4b\u2208X \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(C(\ud835\udc4b)))) \u0013 (8) While the summation over X offers convenient interpretation, the optimization is better understood if it is written equivalently as a summation over clusters, as follows: O = \u2211\ufe01 \ud835\udc36\u2208C \u2211\ufe01 \ud835\udc4b\u2208\ud835\udc36 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36)) + \ud835\udf061 \u00d7 \u2211\ufe01 \ud835\udc36\u2208C \u2211\ufe01 \ud835\udc4b\u2208\ud835\udc36 \u0000\ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36))\u00012 + \ud835\udf062 \ud835\udf19\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc52 \u0012 \u2211\ufe01 \ud835\udc36\u2208C \u2211\ufe01 \ud835\udc4b\u2208\ud835\udc36 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36))) \u0013 (9) 5.2.1 Cluster Representative Updates. While just optimizing for the first term (as in \ud835\udc3e-Means) allows achieving a closed-form solution for estimating a new cluster representative given the cluster memberships, the inclusion of the other loss terms cause much dependencies. However, the first derivative can be equated to zero to give an iterative update formula as follows: \u2200\ud835\udc34\u2208A, \ud835\udc45(\ud835\udc36)[\ud835\udc34] = \u00cd \ud835\udc4b\u2208\ud835\udc36 \ud835\udc4b[\ud835\udc34] \u0012 1 + 2\ud835\udf061\ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36)) + \ud835\udf062\u00d7\ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\ud835\udc51(\ud835\udc4b,\ud835\udc45(\ud835\udc36))) \u00cd \ud835\udc36\u2032\u2208C \u00cd \ud835\udc4b\u2032\u2208\ud835\udc36\u2032 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\ud835\udc51(\ud835\udc4b\u2032,\ud835\udc45(\ud835\udc36\u2032))) \u0013 \u00cd \ud835\udc4b\u2208\ud835\udc36 \u0012 1 + 2\ud835\udf061\ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36)) + \ud835\udf062\u00d7\ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\ud835\udc51(\ud835\udc4b,\ud835\udc45(\ud835\udc36))) \u00cd \ud835\udc36\u2032\u2208C \u00cd \ud835\udc4b\u2032\u2208\ud835\udc36\u2032 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\ud835\udc51(\ud835\udc4b\u2032,\ud835\udc45(\ud835\udc36\u2032))) \u0013 (10) where \ud835\udc45(\ud835\udc36)[\ud835\udc34] denotes the value associated with the \ud835\udc34\ud835\udc61\u210eattribute of the cluster representative. It is easy to note that this equation is not in closed form, given that what is to be estimated, i.e., \ud835\udc45(\ud835\udc36), also appears on the RHS of the equation. It may further be seen that all cluster representatives, \ud835\udc45(\ud835\udc36\u2032), \u2200\ud835\udc36\u2032 \u2208C affect each other (note the denominator of the third term in the numerator as well as denominator). The intuitive appeal for Eq. 
10 is evident \fRepresentativity Fairness in Clustering WebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom when it is viewed as estimating each cluster representative as a weighted sum of objects in the cluster with object-specific weights, with object-specific weights being directly related to: \u2022 the current representativity loss of the object; & \u2022 an amplified and normalized version of the current representativity loss of the object across all attributes. This construction drags the cluster representative closer to objects that are currently much farther away from itself. Relating this to the example in Figure 1, this would move the cluster representative from the position of the black object further towards the grey object due to the far right object being accorded much more weight due to it being far away as compared to the others. It may also help to note that for sufficiently large values of \ud835\udf19, the third term approximates as follows: \ud835\udf062 \u00d7 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36))) \u00cd \ud835\udc36\u2032\u2208C \u00cd \ud835\udc4b\u2032\u2208\ud835\udc36\u2032 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b\u2032, \ud835\udc45(\ud835\udc36\u2032))) \u2248 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \ud835\udf062 if \ud835\udc4bis the object that is most distant from its currently assigned cluster representative 0 otherwise (11) Thus, for every object other than the one with the highest representativity loss, the third term becomes negligible. 5.2.2 Cluster Assignment Updates. Each of the first two terms in the objective have one term corresponding to each object, with the third term relating to the entire clustering. Under the current estimates of cluster representatives, the cluster assignment can be varied to set it to what would yield the smallest value for the overall objective; this yields: C(\ud835\udc4b) = arg min \ud835\udc36\u2208C \u0012 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36))+ \ud835\udf061 (\ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36)))2 + \ud835\udf062T \ud835\udc38\ud835\udc51\ud835\udc56\ud835\udc61(X,C,\ud835\udc4b,\ud835\udc36) \u0013 (12) where T \ud835\udc38\ud835\udc51\ud835\udc56\ud835\udc61(X,C,\ud835\udc4b,\ud835\udc36) is the term 1 \ud835\udf19\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc52 \u0000 \u00cd \ud835\udc36\u2032\u2208C \u00cd \ud835\udc4b\u2032\u2208\ud835\udc36\u2032 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36\u2032)))\u0001 evaluated over the clustering C of the dataset X, with a single change in cluster assignment, that of re-assigning\ud835\udc4bto\ud835\udc36; we do not change the cluster representative during the course of this reassignment. Recall from Eq. 7 that 1 \ud835\udf19\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc52 \u0000 \u00cd \ud835\udc36\u2032\u2208C \u00cd \ud835\udc4b\u2032\u2208\ud835\udc36\u2032 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udf19\u00d7 \ud835\udc51(\ud835\udc4b, \ud835\udc45(\ud835\udc36\u2032)))\u0001 is an approximation for the maximum representativity loss according to clustering C over X. Thus, the third term in Eq. 12, in effect, adds an incentive to favour cluster assignments that reduce the max representativity loss. 
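Both update steps admit direct numpy renderings. The sketch below is our illustrative reading of Eqs. 10 and 12: a single fixed-point pass for the representative update (since R(C) appears on both sides of Eq. 10, using the current representatives on the right-hand side), and an object-by-object reassignment that re-evaluates the smooth-max term with only that object's term changed. Variable names are our own choices.

```python
import numpy as np

def rfkm_assignments(X, reps, labels, lam1, lam2, phi):
    """Eq. 12: reassign each object, penalizing increases in the (smooth) max loss."""
    d_all = ((X[:, None, :] - reps[None, :, :]) ** 2).sum(axis=2)   # d(X, R(C)) for all C
    d_cur = d_all[np.arange(len(X)), labels]
    exp_cur = np.exp(phi * d_cur)
    total = exp_cur.sum()
    new_labels = labels.copy()
    for i in range(len(X)):
        base = total - exp_cur[i]                                   # every object except i
        obj = d_all[i] + lam1 * d_all[i] ** 2 \
              + (lam2 / phi) * np.log(base + np.exp(phi * d_all[i]))
        new_labels[i] = obj.argmin()
    return new_labels

def rfkm_prototype_pass(X, reps, labels, lam1, lam2, phi):
    """One fixed-point pass of Eq. 10: objects far from their representative get more weight."""
    d = ((X - reps[labels]) ** 2).sum(axis=1)
    exp_norm = np.exp(phi * d) / np.exp(phi * d).sum()
    w = 1.0 + 2 * lam1 * d + lam2 * exp_norm
    new_reps = reps.copy()
    for c in range(len(reps)):
        members = labels == c
        if members.any():
            new_reps[c] = (w[members, None] * X[members]).sum(axis=0) / w[members].sum()
    return new_reps
```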
In other words, the third term introduces significant resistance to cluster assignment updates that would increase the max representativity loss across the clustering and vice versa. While this does not follow that RFKM can cause a direct change from the left configuration to the right configuration in Figure 2, the RFKM objective scores 27% worse for the left configuration than the right (in contrast, the \ud835\udc3e-Means objective prefers the left configuration); this entails that RFKM would prefer to gravitate towards the right configuration as compared to the left one, across iterations. Alg. 1 RFKM Input. Dataset X, Attribute Set A, number of clusters \ud835\udc58 Hyper-parameters: \ud835\udf061 and \ud835\udf062, max iterations Output. Clustering C and associated cluster representatives 1. Initialize \ud835\udc58clusters 2. Set cluster representatives using Eq. 10 3. \ud835\udc64\u210e\ud835\udc56\ud835\udc59\ud835\udc52(\ud835\udc5b\ud835\udc5c\ud835\udc61\ud835\udc66\ud835\udc52\ud835\udc61\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc63\ud835\udc52\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc51\ud835\udc4e\ud835\udc5b\ud835\udc51\ud835\udc5a\ud835\udc4e\ud835\udc65. \ud835\udc56\ud835\udc61\ud835\udc52\ud835\udc5f\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc5b\ud835\udc5c\ud835\udc61\ud835\udc5f\ud835\udc52\ud835\udc4e\ud835\udc50\u210e\ud835\udc52\ud835\udc51) 4. \u2200\ud835\udc4b\u2208X, 5. Set C(X) using Eq. 12 6. Update cluster representatives as outlined in Eq. 10 8. Return Calong with the cluster representatives Name # Instances # Attributes # Classes Iris 150 4 3 Yeast 1484 8 10 Wireless3 2000 7 4 Avila 20867 10 12 Letter4 20000 16 26 Table 3: Dataset Statistics 5.3 The Overall Technique Having detailed the separate steps of the optimization process, we are now ready to summarize the overall approach. As outlined in Algorithm 1, RFKM starts with a random initialization of cluster memberships followed by iterative refinement alternating between re-estimating cluster memberships and cluster representatives. 5.3.1 Complexity. Let the number of objects, attributes, clusters and the maximum number of iterations be \ud835\udc5b, \ud835\udc5a, \ud835\udc58and \ud835\udc61respectively. Then, the complexity of RFKM is in O(\ud835\udc5b\ud835\udc5a\ud835\udc58\ud835\udc61), making it asymptotically as fast as \ud835\udc3e-Means. The key point to note is that within the cluster representative learning step in Eq. 10, the denominator in the third term of the object weight construction is independent of the choice of the current cluster, i.e., \ud835\udc36. Thus, it can be pre-computed before each cluster assignment separately, and used to weigh the contribution from each object within the cluster. Given the linear cost, RFKM compares favorably against recent fair clustering algorithms that are super-quadratic [7] and quadratic [1]. 6 EXPERIMENTAL EVALUATION We now describe the empirical evaluation of our method against existing clustering formulations. We start by describing the datasets and experimental setup, followed by baselines and evaluation measures. We present results of empirical analyses across a number of real-world datasets, and analyze the results across various facets. 6.1 Datasets and Experimental Setup We use a number of datasets from the UCI Machine Learning repository [9] for our empirical study. The usage of public datasets, we hope, will aid benchmarking and reproducibility. 
The details of the 3short for Wireless Indoor Localization 4short for Letter Recognition \fWebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Deepak P and Savitha Sam Abraham datasets we employ in our study are listed in Table 3. These incorporate a wide variety of dataset sizes, ranging from 150 to 20\ud835\udc58, a wide range of attribute numbers (4 to 16) and a range of number of classes (3 to 26). Given that clustering is an unsupervised learning task, the class labels are only used for evaluation. In all cases, unless otherwise mentioned, we set the number of desired output clusters, i.e. \ud835\udc58, to the number of classes. This is intuitive since we are interested in analyzing whether clustering is capable of capturing the class-wise grouping of objects. The choice of datasets with varying sizes and attributes are intended to illustrate the generalizability of our empirical study. We set the value of \ud835\udf061 to 1.0; we will study the trends against varying values of \ud835\udf061 separately. The third term in Eq. 9 is an approximation of the max, and thus, this term would be expected to be quite small when compared with the first two that sum over all objects; accordingly, we set \ud835\udf062, its co-efficient to be |X| 10 to ensure it is well-accounted. We empirically observed that values for \ud835\udf19= 3 is sufficient to achieve a good approximation of \ud835\udc5a\ud835\udc4e\ud835\udc65, and thus, set it thus. 6.2 Baselines and Evaluation Measures 6.2.1 Baselines. Our main baseline is the recent fair clustering work [6] that also builds its fairness measure upon proximity and representation, albeit in a significantly different way. There are two techniques that they propose, which we will refer to as \ud835\udc3f\ud835\udc46 (for Local Search) and \ud835\udc3a\ud835\udc5f\ud835\udc52\ud835\udc52\ud835\udc51\ud835\udc66respectively, within our experimental analyses. Much like the empirical evaluation in [6], we also compare our approach to the \ud835\udc3e-means method (denoted as \ud835\udc3e\ud835\udc40). We also use the same heuristic for cluster initialization in line 1 of RFKM (refer Algorithm 1). Given the random initialization step in \ud835\udc3e\ud835\udc40and \ud835\udc45\ud835\udc39\ud835\udc3e\ud835\udc40, we consistently report the average results over 100 random initializations, for robustness. 6.2.2 Evaluation Measures. Much like the structure used in [1], we would like to evaluate the RFKM clusterings on two fronts; (i) representativity fairness, and (ii) clustering quality. As developed in Section 3.2, we will use \ud835\udc49\ud835\udc4e\ud835\udc5f, \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5band \ud835\udc40\ud835\udc4e\ud835\udc65for measuring representativity fairness. For clustering quality, we use the following measures: \u2022 Avg (for K-Means Objective): The \ud835\udc3e-Means objective measures the coherence of clusters by way of aggregating the distances of each object to its cluster representative. It may be noted that \ud835\udc34\ud835\udc63\ud835\udc54, the measure discussed in Section 3.2, is related to the per capita \ud835\udc3e-Means objective, i.e., \ud835\udc3e-Means objective normalized by the dataset size. It is also notable that \ud835\udc3e-Means objective is the only clustering quality evaluation measure used in [6]. 
\u2022 Silhouette Score (Sil): Silhouette [21] measures the separatedness of clusters, and quantifies a clustering with a score in [\u22121, +1], higher values indicating well-separated clusters. This was used in [1] as a clustering quality metric. \u2022 Clustering Purity (Pur): Yet another way to measure the quality of the clustering is to see how well it adheres to the manual labellings available in the dataset. Clustering purity5 is a popular measure that captures the alignment between clusters and dataset 5https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html labels: \ud835\udc43\ud835\udc62\ud835\udc5f(C, L, X) = 1 |X| \u2211\ufe01 \ud835\udc36\u2208C max \ud835\udc3f\u2208L |\ud835\udc36\u2229\ud835\udc3f| (13) It may however be noted that some of the datasets that we use are designed for classification benchmarking; Thus, we do not expect clustering methods to deliver very high purities over them. Still, the relative trends across the methods would offer a legitimate comparative perspective. It may be noted that higher values are desirable on \ud835\udc46\ud835\udc56\ud835\udc59, \ud835\udc43\ud835\udc62\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b, whereas lower values are desirable on all other measures. 6.3 Experimental Results We now analyze the comparative performance of RFKM against KM, LS [6] and Greedy [6] on the two fronts; representativity fairness and clustering quality. 6.3.1 Representativity Fairness. The representativity fairness evaluation appears on Table 4. As expected, RFKM consistently performs better than the competing techniques on each of \ud835\udc49\ud835\udc4e\ud835\udc5f, \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5band \ud835\udc40\ud835\udc4e\ud835\udc65. The per-measure aggregate improvements, the average of row-specific percentage improvements, are recorded at the bottom row. Between \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc40\ud835\udc4e\ud835\udc65, the performance improvements are much higher for \ud835\udc40\ud835\udc4e\ud835\udc65as against \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b. While RFKM is targeted to optimize for all three measures, it is easier to rein in the few high values in the R[.] vector than to reduce dispersion across all; this reflects in the high improvements recorded for \ud835\udc40\ud835\udc4e\ud835\udc65. Among \ud835\udc49\ud835\udc4e\ud835\udc5fand \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b, the latter has an upper bound of 1.0, and with some baselines, values being already beyond 0.80, there is \u2018not enough space\u2019 to improve, unlike the case of \ud835\udc49\ud835\udc4e\ud835\udc5f. In fact, RFKM records a 4.6 percentage point improvement on the \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bmeasure which is significant and substantial in those ranges. The improvements, while consistent, differ across datasets. The quantum of improvements are quite small for the \ud835\udc3c\ud835\udc5f\ud835\udc56\ud835\udc60dataset; this is likely because \ud835\udc3c\ud835\udc5f\ud835\udc56\ud835\udc60has just 150 data points spread across 4 attributes and 3 classes. This provides limited possibilities in arriving at alternative clusterings that optimize for representativity fairness while still retaining cluster coherence. 
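The clustering purity measure of Eq. 13, used in the clustering quality evaluation that follows, can be computed along the lines of the minimal Python sketch below (illustrative names; clusters and labels are assumed to be given as integer arrays).

```python
import numpy as np

def clustering_purity(cluster_ids, label_ids):
    """Clustering purity as in Eq. 13: for each cluster, count how many of its
    objects carry the cluster's most common dataset label, sum across clusters,
    and normalize by the dataset size."""
    cluster_ids = np.asarray(cluster_ids)
    label_ids = np.asarray(label_ids)
    total = 0
    for c in np.unique(cluster_ids):
        labels_in_c = label_ids[cluster_ids == c]
        total += np.bincount(labels_in_c).max()   # |C intersect L| for the best-matching label L
    return total / len(cluster_ids)

# Toy example: two clusters, mostly aligned with two labels
print(clustering_purity([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))  # ~0.833
```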
That \ud835\udc3e\ud835\udc40fares ahead of \ud835\udc3f\ud835\udc46and \ud835\udc3a\ud835\udc5f\ud835\udc52\ud835\udc52\ud835\udc51\ud835\udc66 may be considered as an empirical indication that the notion of proportionality that \ud835\udc3f\ud835\udc46and \ud835\udc3a\ud835\udc5f\ud835\udc52\ud835\udc52\ud835\udc51\ud835\udc66use is reasonably different from the notion of representative fairness that we evaluate. 6.3.2 Clustering Quality Evaluation. As indicated in Section 3.2, we expect that clusterings that seek to advance representativity fairness are likely to take a hit on clustering quality metrics, given that there these criteria are not necessarily at harmony with each other. The clustering quality as evaluated over \ud835\udc34\ud835\udc63\ud835\udc54, \ud835\udc46\ud835\udc56\ud835\udc59and \ud835\udc43\ud835\udc62\ud835\udc5fare outlined in Table 5. True to expectations, RFKM records a better performance on these metrics only on a minority of scenarios; in particular, RFKM is the top performer on only 5 combinations6 out of 15 (5 datasets, 3 clustering quality measures). We will first analyze the performance on \ud835\udc34\ud835\udc63\ud835\udc54and \ud835\udc43\ud835\udc62\ud835\udc5f. RFKM is seen to lag 7.2% and 4.12% behind the next best performing method on the \ud835\udc34\ud835\udc63\ud835\udc54 and \ud835\udc43\ud835\udc62\ud835\udc5fmeasures respectively. It may however be noted that the next best performing method is not always the \u2019same\u2019 method; in certain cases, it is \ud835\udc3e\ud835\udc40and it is \ud835\udc3f\ud835\udc46and \ud835\udc3a\ud835\udc5f\ud835\udc52\ud835\udc52\ud835\udc51\ud835\udc66in certain other cases. That said, given that \ud835\udc3e\ud835\udc40is an overwhelming frontrunner (scoring 6RFKM is joint best on 2 out of those combinations. \fRepresentativity Fairness in Clustering WebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Dataset Var \u2193 Jain \u2191 Max \u2193 Name LS Greedy KM RFKM LS Greedy KM RFKM LS Greedy KM RFKM Iris 0.23 0.27 0.11 0.11 0.75 0.69 0.80 0.81 2.42 2.56 1.66 1.63 Yeast 1.21E-2 1.63E-2 6.66E-3 6.26E-3 0.71 0.72 0.84 0.86 0.86 0.87 0.77 0.70 Wireless 33.27 61.43 19.96 17.83 0.84 0.77 0.84 0.88 44.61 53.41 35.84 31.20 Avila 10.95 1.62 1.07 0.25 0.64 0.78 403.64 43.00 14.28 Letter 3.73 2.88 2.33 0.91 0.93 0.95 16.74 17.81 16.74 RFKM Perf. 18.64% better 5.68% better 46.81% better Table 4: Representativity Fairness Evaluation. Notes: (i) Arrows next to measures indicate whether higher or lower values are desirable. (ii) The best value for each measure on each dataset is highlighted in bold. (iii) Some runs of the Greedy approach did not complete in reasonable amounts of time and memory, and thus, those cells are greyed out. Dataset Avg \u2193 Sil \u2191 Pur \u2191 Name LS Greedy KM RFKM LS Greedy KM RFKM LS Greedy KM RFKM Iris 0.83 0.78 0.65 0.68 0.49 0.51 0.55 0.55 0.90 0.95 0.89 0.89 Yeast 0.18 0.20 0.19 0.19 0.10 3.68E-3 0.26 0.27 0.32 0.32 0.42 0.41 Wireless 13.22 14.18 10.46 11.13 0.33 0.25 0.40 0.39 0.88 0.82 0.93 0.90 Avila 1.89 1.70 1.92 0.03 0.15 0.18 0.41 0.46 0.42 Letter 6.07 6.41 6.47 0.08 0.15 0.15 0.05 0.16 0.16 RFKM Perf. 7.20% behind 4.27% better 4.12% behind Table 5: Clustering Quality Evaluation. Notes: (i) Arrows next to measures indicate whether higher or lower values are desirable. (ii) The best value for each measure on each dataset is highlighted in bold. 
(iii) Some runs of the Greedy approach did not complete in reasonable amounts of time and memory, and thus, those cells are greyed out. highest in 10 out of 15 combinations), a straight comparison pitting RFKM against \ud835\udc3e\ud835\udc40would evaluate to a 4.97% deterioration on \ud835\udc34\ud835\udc63\ud835\udc54and 3.14% deterioration on \ud835\udc43\ud835\udc62\ud835\udc5f. These deteriorations are seen to be quite limited, and quite small when compared to the gains achieved on representativity fairness. Turning our attention to \ud835\udc46\ud835\udc56\ud835\udc59, RFKM records a different picture. RFKM is seen to be performing better than the baselines quite consistently on \ud835\udc46\ud835\udc56\ud835\udc59, and records an average of 4.27% improvement. While this indeed be regarded as surprising, the cluster representative learning step in RFKM offers some cues to explain this result. \ud835\udc46\ud835\udc56\ud835\udc59measures how well separated the cluster representatives are, with respect to the objects in the dataset. The cluster representative learning step in RFKM accords higher weighting to far off data objects, dragging the representative towards them. To ensure meaningful movement, over iterations, it is plausible that different cluster representatives be dragged in different directions, enhancing their mutual separation. Such effects are likely behind the better RFKM performance on \ud835\udc46\ud835\udc56\ud835\udc59, and these observations point to interesting future work as to the use of representativity fairness in more general scenarios that focus on particular aspects of clustering quality that are aligned with \ud835\udc46\ud835\udc56\ud835\udc59. 6.3.3 Trends with \ud835\udf061. We now analyze the RFKM performance against the various measures, varying the value of \ud835\udf061 from 0.5 to 2.0 in steps of 0.5. \ud835\udf061 is a way of setting the strength of the second term in Eq. 9, the term that strives to reduce the larger representativity losses towards ensuring lower \ud835\udc49\ud835\udc4e\ud835\udc5fand higher \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5bscores. The second term is also related to \ud835\udc40\ud835\udc4e\ud835\udc65, though that is more directly handled by the third term. Thus, with increasing Figure 3: Avila: \ud835\udc34\ud835\udc63\ud835\udc54and \ud835\udc46\ud835\udc56\ud835\udc59vs. \ud835\udf061 Figure 4: Avila: \ud835\udc43\ud835\udc62\ud835\udc5fand \ud835\udc49\ud835\udc4e\ud835\udc5fvs. \ud835\udf061 \fWebSci \u201920, July 6\u201310, 2020, Southampton, United Kingdom Deepak P and Savitha Sam Abraham Figure 5: Avila: \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5band \ud835\udc40\ud835\udc4e\ud835\udc65vs. \ud835\udf061 \ud835\udf061, we would expect higher values of Jain and Avg, and lower values on Var, Max, Sil and Pur. In other words, we would expect better values on representativity fairness measures and poorer values on clustering quality measures. We observed consistent trends across the datasets, and plot only the Avila dataset trends here for brevity. With each of these measures being in different ranges, we plot these across Figures 3, 4 and 5, each figure plotting two measures, one on the left Y axis and another on the right Y axis; the legend indicates whether the measure is plotted on the left or right. 
Across these plots, we observe consistent, gradual and expected trends on \ud835\udc34\ud835\udc63\ud835\udc54(increase recorded in Fig 3), \ud835\udc49\ud835\udc4e\ud835\udc5f(decrease recorded in Fig 4) and \ud835\udc3d\ud835\udc4e\ud835\udc56\ud835\udc5b(increase recorded in Fig 5). \ud835\udc46\ud835\udc56\ud835\udc59and \ud835\udc43\ud835\udc62\ud835\udc5fare seen to be swinging within very small ranges (Fig 3 and Fig 4) and may be regarded as stable within these ranges of \ud835\udf061. On the other hand, \ud835\udc40\ud835\udc4e\ud835\udc65shows consistent and expected decrease except for the case of \ud835\udf061 = 0.5, as seen in Fig 5. Overall, these illustrate that the second term broadly works in line with the intuition in Sec 5.1. It is also interesting to note that the evaluation measures are fairly smooth on \ud835\udf061 and do not cause any abrupt changes. 7" + }, + { + "url": "http://arxiv.org/abs/2007.07838v1", + "title": "Whither Fair Clustering?", + "abstract": "Within the relatively busy area of fair machine learning that has been\ndominated by classification fairness research, fairness in clustering has\nstarted to see some recent attention. In this position paper, we assess the\nexisting work in fair clustering and observe that there are several directions\nthat are yet to be explored, and postulate that the state-of-the-art in fair\nclustering has been quite parochial in outlook. We posit that widening the\nnormative principles to target for, characterizing shortfalls where the target\ncannot be achieved fully, and making use of knowledge of downstream processes\ncan significantly widen the scope of research in fair clustering research. At a\ntime when clustering and unsupervised learning are being increasingly used to\nmake and influence decisions that matter significantly to human lives, we\nbelieve that widening the ambit of fair clustering is of immense significance.", + "authors": "Deepak P", + "published": "2020-07-08", + "updated": "2020-07-08", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY", + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Fair Machine Learning (Fair ML) is a \ufb02ourishing discipline of study that has gathered much attention in the last several years, starting from an early pioneering work in [Dwork et al., 2012]. Of late, a newly instituted interdisciplinary conference series, ACM FAT*/FAccT1 has bolstered further interest. Broadly, there have been two fairness streams explored in Fair ML literature: (i) individual fairness that prefers adherence to treating similar people similarly, and (ii) group fairness which involves ensuring some notion of \u2019fair\u2019 distribution of analytics results across groups de\ufb01ned on sensitive attributes such as gender, race, ethnicity and religion. Over the past years, signi\ufb01cant progress has been made in fair classi\ufb01cation, with emergence of computational notions such as independence, separation and suf\ufb01ciency [Barocas et al., 2017]. Supervised learning has the luxury of availability of labelled data that encompasses information of historical decisions. In the case of binary decision making (success/fail), the chasm between the base success rate for each sensitive class (e.g., gender) and their representation within the training data provides a fertile ground for the 1https://facctconference.org/ pursuit of fairer supervised learning. 
On the other hand, unsupervised or exploratory learning does not assume availability of labels in the data, making fairness within unsupervised learning quite a distinct notion from the former. It may be noted that unsupervised learning is of growing signi\ufb01cance in ML, and is often referred to as the next frontier in AI2. Fairness in unsupervised machine learning may be expected to increase in importance with the broadening scope of unsupervised learning, facilitated by the growth of data volumes far outpacing any attempt at getting them labelled. This data growth has been facilitated in the public sector by an expansion of the methods for \u2019passive\u2019 data collection, where data is collected through safety/surveilance cameras and IoT devices as part of smart city infrastructure. In the private sector, the user\u2019s mobility patterns are available to map services (e.g., Google Maps, Bing Maps) and black-box car insurance providers, social interests are available to social media companies (e.g., Facebook, Twitter), and with the advent of PDAs (e.g., Echo, Home), web tech giants can potentially have access to audio conversations within homes. Clustering, arguably the most popular task in unsupervised learning, has seen much fairness-oriented research attention in the last few years. The pioneering work on this stream was one on data pre-processing to facilitate fair clustering [Chierichetti et al., 2017]. Our analysis of the community\u2019s approach to the task across the 15+ papers in literature leads us to an argument that the literature has been quite restricted in scope. This is arguably due to treating it as a wellde\ufb01ned computational task, despite it being much more nuanced due to being situated within the space of a sophisticated landscape of normative principles. We assess fair clustering literature in the backdrop of the political philosophy around fairness and justice, and make the following arguments: \u2022 Normative Target: The normative principles, the space of values targeted, that have been used across fair clustering formulations have been quite narrow in scope, and signi\ufb01cantly narrower than in the case of fair supervised learning. This is to be seen in the backdrop of the plethora of normative principles available in political philosophy. In particular, we observe that most clustering formulations have relied on alleviating disparate impact through representational parity, a pursuit of group fairness that relates to 2https://bit.ly/2zWjTEo Yann LeCun, 2018 Turing Laureate \fluck egalitarianism [Lang, 2009] when sensitive attributes are considered as manifestations of brute luck choices. It is also noteworthy that the relationship between egalitarianism and discrimination avoidance has been argued to be nuanced [Binns, 2018]. \u2022 Shortfall Characterization: Clustering, as a dataset-level optimization task, is very well understood to be complex. Given the complexities, most formulations fall short of achieving the representational parity goal that they target for. Techniques have focused on either bounding the shortfall theoretically, or illustrating empirically that the quantum of shortfall is tolerable. The critical missing piece is that the shortfall, while being quanti\ufb01ed as above, has been left uncharacterized. It has not been elucidated as to what what kind of data objects are likely to suffer more or less from the shortfall. 
For usage in practical scenarios, especially within public sector, absence of such a characterization of the shortfall could be a potential dealbreaker. \u2022 Application Space: Most clustering formulations seek to achieve their fairness goal in each of the clusters in the output. In a way, they are being application-agnostic and try to ensure that whatever be the downstream application that makes use of the clustering, there is some form of fairness assurance that the techniques provide. However, typical clustering outputs could be used in order to decide from among a small set of decision choices, which could additionally be placed somewhere in the spectrum of positive or negative. Information about the downstream usage of clustering outputs could both: (i) improve the ability to optimize better for the chosen optimization goal, and (ii) render the formulation more suited to particular domains. 2 Case Study: Clustering for Job Shortlisting Towards putting forth the arguments raised above, we will use the backdrop of a setting where clustering is used to inform consequential decisions directly. Consider the case of a heavily oversubscribed job vacancy, where manual perusal of each of them is out of question. Such a scenario is routinely encountered in the case of government jobs in populated developing countries3. We consider a pipeline of clustering usage for such a scenario. First, the received applications would be subject to clustering using a similarity measure that is relevant to assessing the suitability to the job, to generate perhaps hundreds of clusters. Second, a representative application from each cluster, perhaps the medoid, would be subject to manual assessment for suitability to the job. Third, the arrived assessment for the medoid, likely one of shortlist, reject, scrutinize further would be applied to all applications in its cluster. Fourth, those labelled scrutinize further by virtue of enough ambiguity on the suitability assessment, could be subject to further clustering, or if there is enough manual bandwidth available, subject to individual manual assessments. As an illustrative example to appreciate the need for clustering fairness within this pipeline, observe that generating a set of gender-skewed clusters could help reinforce gender stereotypes that play a part in manual perusal. The 3https://www.bbc.co.uk/news/world-asia-india-43551719 cluster-level decisions made over such gender-skewed clusters could then become implicitly gender-aligned. Data analytics\u2019 role in reinforcing social and economic inequalities has been the topic of several recent books [O\u2019neil, 2016]. 3 Normative Target: What to Optimize for The normative principle used in a number of fair clustering formulations is that of assuming that each attribute be either considered sensitive or task-relevant, followed by targeting to preserve the dataset-wide distribution of objects along sensitive attributes within each cluster. For example, with gender regarded sensitive, this translates to ensuring that the gender ratio within each cluster be very similar to the gender ratio in the dataset. Different fair clustering formulations differ in the number and kind of sensitive attributes they admit; such a characterization of literature appears in [Abraham et al., 2020] (Ref. Table 1). 
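For a single sensitive attribute, the representational-parity check described above amounts to comparing each cluster's distribution over that attribute with the dataset-wide distribution; the short Python sketch below (illustrative, not drawn from any of the surveyed papers) makes that concrete.

```python
from collections import Counter

def parity_gap(cluster_groups, dataset_groups):
    """Largest absolute deviation between a cluster's distribution over a
    sensitive attribute (e.g., gender) and the dataset-wide distribution.
    A value near 0 means the cluster mirrors the dataset proportions."""
    def proportions(groups):
        counts = Counter(groups)
        n = len(groups)
        return {g: c / n for g, c in counts.items()}

    dataset_p = proportions(dataset_groups)
    cluster_p = proportions(cluster_groups)
    return max(abs(cluster_p.get(g, 0.0) - p) for g, p in dataset_p.items())

# Toy example: dataset is 50/50 on gender, while the cluster is 80/20
dataset = ["F"] * 50 + ["M"] * 50
cluster = ["F"] * 8 + ["M"] * 2
print(parity_gap(cluster, dataset))   # 0.3, i.e., a 30 percentage-point skew
```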
The similarities between data objects on task-relevant attributes are deemed to be relevant to the task in the same manner; weights may be attached to attributes to differentiate the quantum of in\ufb02uence, but the nature of the in\ufb02uence remains similar. First, the crisp binary distinction between sensitive and non-sensitive attributes begs apparent criticism. There are often attributes on which discrimination could be avoided, but not necessarily as strongly. For example, the age or region/province attribute could be such; there is typically a higher degree of tolerance towards skew in age and regions (e.g., urban skew), but purely age-homoegeous or regionhomogeneous clusters are nevertheless undesirable. Second, while sensitive attributes are often outcomes of what are called brute luck, there exist other luck-in\ufb02uenced attributes whose placement is not clear in the sensitive/taskrelevant dichotomy. These include the likes of option luck [Dworkin, 2002] which relate to choices made on the face of considerable uncertainty of how things would turn out. For example, a career-break due to startup failure is unlike brute luck, but still not something that the candidate should heavily scored down on. Some addressal of option luck may be achieved by manually engineering covariate features to control. Third, there is signi\ufb01cant space to expand the normative target outside the space of egalitarianism, notably the Rawlsian choice [Rawls, 1971] in the fairnessef\ufb01ciency trade-off. There are other targets within the socalled patterned notions (as outlined in [Nozick, 1974]), and prefer to allocate resources in accordance with patterns such as need or moral desert4. This would be especially true of hiring in the public sector where the government could use such patterned allocation in order to associate esteem with certain values. This would require identifying attributes that correlate with need and desert and treating them specially so that people with similar needs and deserts be clustered together. Desert may often need to be speci\ufb01ed through attribute-combinations; a candidate from a backward region who has shown exceptional interest in a trade despite limited access to facilities may be considered as scoring high on moral desert. Similarity search has explored multiple unconventional and complex aggregation op4Desert (in philosophy) \u2248quality of being considered deserving. \ferators [Deepak and Deshpande, 2015]. The lack of diversity in normative targets is also true of supervised machine learning, though perhaps only to a lesser extent. While we started off observing that group fairness on sensitive attributes has been the mainstay in fair clustering, a few deviant formulations are worthy of mention. Proportionally fair clustering [Chen et al., 2019] proposes an ingenious notion of collective desert; it requires that a suf\ufb01ciently large collective of proximal objects would deserve a cluster of their own. Representativity Fairness [P and Abraham, 2020], on the other hand, prefers egalitarian distribution of the cost of abstraction incurred due to the clustering. 4 Characterization of Residual Unfairness Once the normative target is decided, fair clustering formulations translate the target to a mathematical optimization formulation. With even simple clustering formulations being computationally hard [Mahajan et al., 2012], fair clustering will also involve approximations. 
These might be in the form of theoretical approximation bounds [Chierichetti et al., 2017; Bera et al., 2019] or demonstration of empirical effectiveness [Abraham et al., 2020]. While it is eminently desirable that the chosen target be achieved as much as possible, it is also useful to have an understanding of how it falls short when it does indeed fall short; this aspect has not been explored at all to our best knowledge. An important question that one may ask is whether the residual unfairness is Rawlsian [Rawls, 1971]; whether it is arranged to the greatest bene\ufb01t of the least advantaged (ref. difference principle). Answers to such questions are crucial for uptake in practical applications since some kinds of systematic unfairness may be considered as intolerable, especially within public sector. Consider the immensely popular K-Means formulation for clustering [MacQueen, 1967], which some fair clustering formulations build upon (e.g., [Abraham et al., 2020; Ziko et al., 2019]). K-Means clusters may be seen as being located within Voronoi cells centered on the cluster means. Since fair clustering algorithms building upon the K-Means framework are intuitively likely to make the pro-fairness adjustments through membership re-assignments at the fringes of clusters, fringe objects would likely bear the cost/bene\ufb01t of fairness more than others. For example, in attributes with a bimodal distribution, say, a mixture of people with no career breaks at all, and long career breaks (e.g., maternity etc.), people with mid-sized career breaks may get reassigned, and could bene\ufb01t or lose out depending on which side of the line they fall. Consider another example of a data pre-processing method for fair clustering; the fairlet clustering method [Chierichetti et al., 2017], in a gender-balanced dataset, would create fairlets as pairs, each pair comprising one from each gender (assuming binary genders for narrative simplicity only). Data objects that do not have an object of the other gender in its vicinity would stand to lose out due to being paired with a far-off object with which it bears shallow resemblance. As from the above two cases and their comparative evaluation, the cost of the fairness adjustments are unlikely to be random and would be borne asymmetrically across dataset objects. Higher volatility, and thus higher bene\ufb01ts or detriments, would likely be placed on objects that deviate much from the implicit data pattern assumptions made within the clustering formulations. While such qualitative differences of fairness shortfalls would be hard to be done away with, a characterization of the fairness shortfall, through quantitative metrics or exemplars, would be necessary to inspire con\ufb01dence that fair clustering formulations do not exacerbate secondary biases while alleviating major ones. 5 Application Space Information The clusters in our job shortlisting scenario, we assumed, would be manually assigned one of three decisions, eventually leading to one of two decisions, shortlist or reject. Once this process is complete, we would obviously only care about whether there is representational parity on sensitive attributes over the shortlisted set (being just two sets, this would implicitly be equivalent to ensuring the same for the rejected set as well). In other words, the upstream clustering algorithm that tried to enforce representational parity in each of the several hundred clusters it generated, was, simply put, addressing a needlessly constrained problem. 
While it is impossible for the clustering algorithm to foresee the human decisions that would be assigned to each cluster, fair clustering formulations could be re-designed to provide interactive fairness guidance. For example, as soon as a cluster is chosen for the shortlist decision, the clustering could be re-run on the residual dataset with a different fairness target, that which seeks fairness among the clusters conditional on the choice(s) already made (this may be seen as similar in spirit the alternative clustering task [Bae and Bailey, 2006] at the high-level). Another handling of this would be for a one-shot clustering to produce, along with clusters, dependencies among clusters indicating that certain cluster pairs be assigned the same decision. This would be expected in cases of clusters that deviate from fairness in different directions, so this dependency constraint across them would help offset them. Such dependencies could also be envisioned as being one of must-link and cannot-link inspired by related literature on semi-supervised clustering [Basu et al., 2002]. In cases with multiple (discrete/continuous) decision choices in the spectrum of positive to negative decisions, representational parity or other fairness considerations may be higher in certain parts of the decision space than others. For example, we may want to ensure that the set of failed candidates in a course not be very homogeneous on gender or race, whereas these may be more relaxed at the higher grades. In other words, 90% of \ufb01rst graders being of a particular ethnicity may be more tolerable than an outcome where 90% of fails coming from the same ethnic background. In short, information on clustering usage would go a long way in providing computational leeway in the pursuit of the chosen fairness targets for the clustering method. 6 Concluding Notes The above discussion was intended towards unravelling the diverse and inter-disciplinary possibilities in extending the scholarly frontier in fair clustering. We hope that researchers \fwith interests in fair clustering would take note of such myriad research frontiers and diversify fair clustering research, an important task for data-driven decision making for the future." + }, + { + "url": "http://arxiv.org/abs/1803.09641v1", + "title": "Unsupervised Separation of Transliterable and Native Words for Malayalam", + "abstract": "Differentiating intrinsic language words from transliterable words is a key\nstep aiding text processing tasks involving different natural languages. We\nconsider the problem of unsupervised separation of transliterable words from\nnative words for text in Malayalam language. Outlining a key observation on the\ndiversity of characters beyond the word stem, we develop an optimization method\nto score words based on their nativeness. Our method relies on the usage of\nprobability distributions over character n-grams that are refined in step with\nthe nativeness scorings in an iterative optimization formulation. Using an\nempirical evaluation, we illustrate that our method, DTIM, provides significant\nimprovements in nativeness scoring for Malayalam, establishing DTIM as the\npreferred method for the task.", + "authors": "Deepak P", + "published": "2018-03-26", + "updated": "2018-03-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "I.2.7" + ], + "main_content": "Introduction Malayalam is an agglutinative language from the southern Indian state of Kerala where it is the of\ufb01cial state language. 
It is spoken by 38 million native speakers, three times as many speakers as Hungarian (Vincze et al., 2013) or Greek (Ntoulas et al., 2001), for which specialized techniques have been developed in other contexts. The growing web presence of Malayalam necessitates automatic techniques to process Malayalam text. A major hurdle in harnessing Malayalam text from social and web media for multilingual retrieval and machine translation is the presence of a large amount of transliterable words. By transliterable words, we mean both (a) words (from English) like police and train that virtually always appear in transliterated form in contemporary Malayalam, and (b) proper nouns such as names that need to be transliterated than translated to correlate with English text. On a manual analysis of a news article dataset, we found that transliterated words and proper nouns each form 10-12% of all distinct words. It is useful to transliterate such words for scenarios that involve processing Malayalam text in the company of English text; this will avoid them being treated as separate index terms (wrt their transliteration) in a multi-lingual retrieval engine, and help a statistical translation system to make use of the link to improve effectiveness. In this context, it ia notable that there has been recent interest in devising specialized methods to translate words that fall outside the core vocabulary (Tsvetkov and Dyer, 2015). In this paper, we consider the problem of separating out such transliterable words from the other words within an unlabeled dataset; we refer to the latter as \u201cnative\u201d words. We propose an unsupervised method, DTIM, that takes a dictionary of distinct words from a Malayalam corpus and scores each word based on their nativeness. Our optimization method, DTIM, iteratively re\ufb01nes the nativeness scoring of each word, leveraging dictionary-level statistics modelled using character n-gram probability distributions. Our empirical analysis establishes the effectiveness of DTIM. We outline related work in the area in Section 2. This is followed by the problem statement in Section 3 and the description of our proposed approach in Section 4. Our empirical analysis forms Section 5 followed by conclusions in Section 7. 2 Related Work Identi\ufb01cation of transliterable text fragments, being a critical task for cross-lingual text analysis, has attracted attention since the 1990s. While most methods addressing the problem have used supervised learning, there have been some methods that can work without labeled data. We brie\ufb02y survey both classes of methods. \f2.1 Supervised and \u2018pseudo-supervised\u2019 Methods An early work(Chen and Lee, 1996) focuses on a sub-problem, that of supervised identi\ufb01cation of proper nouns for Chinese. (Jeong et al., 1999) consider leveraging decision trees to address the related problem of learning transliteration and back-transliteration rules for English/Korean word pairs. Recognizing the costs of procuring training data, (Baker and Brew, 2008) and (Goldberg and Elhadad, 2008) explore usage of pseudo-transliterable words generated using transliteration rules on an English dictionary for Korean and Hebrew respectively. Such pseudo-supervision, however, would not be able to generate uncommon domain-speci\ufb01c terms such as medical/scienti\ufb01c terminology for usage in such domains (unless speci\ufb01cally tuned), and is hence limited in utility. 
2.2 Unsupervised Methods A recent work proposes that multi-word phrases in Malayalam text where their component words exhibit strong co-occurrence be categorized as transliterable phrases (Prasad et al., 2014). Their intuition stems from observing contiguous words such as test dose which often occur in transliterated form while occurring together, but get replaced by native words in other contexts. Their method is however unable to identify single transliterable words, or phrases involving words such as train and police whose transliterations are heavily used in the company of native Malayalam words. A recent method for Korean (Koo, 2015) starts by identifying a seed set of transliterable words as those that begin or end with consonant clusters and have vowel insertions; this is speci\ufb01c to Korean since Korean words apparently do not begin or end with consonant clusters. Highfrequency words are then used as seed words for native Korean for usage in a Naive Bayes classi\ufb01er. In addition to the outlined reasons that make both the unsupervised methods inapplicable for our task, they both presume availability of corpus frequency statistics. We focus on a general scenario assuming the availability of only a word lexicon. 2.3 Positioning the Transliterable Word Identi\ufb01cation Task Nativeness scoring of words may be seen as a vocabulary strati\ufb01cation step (upon usage of thresholds) for usage by downstream applications. A multi-lingual text mining application that uses Malayalam and English text would bene\ufb01t by transliterating non-native Malayalam words to English, so the transliterable Malayalam token and its transliteration is treated as the same token. For machine translation, transliterable words may be channeled to specialized translation methods (e.g., (Tsvetkov and Dyer, 2015)) or for manual screening and translation. 3 Problem De\ufb01nition We now de\ufb01ne the problem more formally. Consider n distinct words obtained from Malayalam text, W = {. . . , w, . . .}. Our task is to devise a technique that can use W to arrive at a nativeness score for each word, w, within it, as wn. We would like wn to be an accurate quanti\ufb01cation of native-ness of word w. For example, when words in W are ordered in the decreasing order of wn scores, we expect to get the native words at the beginning of the ordering and vice versa. We do not presume availability of any data other than W; this makes our method applicable across scenarios where corpus statistics are unavailable due to privacy or other reasons. 3.1 Evaluation Given that it is easier for humans to crisply classify each word as either native or transliterable (nouns or transliterated english words) in lieu of attaching a score to each word, the nativeness scoring (as generated by a scoring method such as ours) often needs to be evaluated against a crisp nativeness assessment, i.e., a scoring with scores in {0, 1}. To aid this, we consider the ordering of words in the labeled set in the decreasing (or more precisely, non-increasing) order of nativeness scores (each method produces an ordering for the dataset). To evaluate this ordering, we use two sets of metrics for evaluation: \u2022 Precision at the ends of the ordering: Topk precision denotes the fraction of native words within the k words at the top of the ordering; analogously, Bottom-k precision is the fraction of transliterable words among \fthe bottom k. 
Since a good scoring would likely put native words at the top of the ordering and the transliterable ones at the bottom, a good scoring method would intuitively score high on both these metrics. We call the average of the top-k and bottom-k precision for a given k, as Avg-k precision. These measures, evaluated at varying values of k, indicate the quality of the nativeness scoring. \u2022 Clustering Quality: Consider the cardinalities of the native and transliterable words from the labeled set as being N and T respectively. We now take the top-N words and bottom-T words from the ordering generated by each method, and compare against the respective labeled sets as in the case of standard clustering quality evaluation1. Since the cardinalities of the generated native (transliterable) cluster and the native (transliterable) labeled set is both N (T), the Recall of the cluster is identical to its Purity/Precision, and thus, the F-measure too; we simply call it Clustering Quality. A cardinalityweighted average of the clustering quality across the native and transliterable clusters yields a single value for the clustering quality across the dataset. It may be noted that we are not making the labeled dataset available to the method generating the ordering, instead merely using it\u2019s cardinalities for evaluation purposes. 4 Our Method: DTIM We now introduce our method, Diversity-based Transliterable Word Identi\ufb01cation for Malayalam (DTIM). We use probability distributions over character n-grams to separately model transliterable and native words, and develop an optimization framework that alternatively re\ufb01nes the ngram distributions and nativeness scoring within each iteration. DTIM involves an initialization that induces a \u201ccoarse\u201d separation between native and transliterable words followed by iterative re\ufb01nement. The initialization is critical in optimization methods that are vulnerable to local optima; the pure word distribution needs to be initialized to \u201ccoarsely\u201d prefer pure words over transliterable words. This will enable further iterations to exploit the initial preference direction to further re\ufb01ne the model to \u201cattract\u201d the pure words 1https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html more strongly and weaken any initial preference to transliterable words. The vice versa holds for the transliterable word models. We will \ufb01rst outline the initialization step followed by the description of the method. 4.1 Diversity-based Initialization Our initialization is inspired by an observation on the variety of suf\ufb01xes attached to a word stem. Consider a word stem |pu|ra|2, a stem commonly leading to native Malayalam words; its suf\ufb01xes are observed to start with a variety of characters such as |ttha| (e.g., |pu|ra|ttha|kki|), |me| (e.g., |pu|ra|me|), |mbo| (e.g., |pu|ra|mbo|kku|) and |ppa| (e.g., |pu|ra|ppa|du|). On the other hand, stems that mostly lead to transliterable words often do not exhibit so much of diversity. For example, |re|so| is followed only by |rt| (i.e., resort) and |po|li| is usually only followed by |s| (i.e., police). Some stems such as |o|ppa| lead to transliterations of two English words such as open and operation. Our observation, upon which we model the initialization, is that the variety of suf\ufb01xes is generally correlated with native-ness (i.e., propensity to lead to a native word) of word stems. 
This is intuitive since non-native word stems provide limited \ufb02exibility to being modi\ufb01ed by derivational or in\ufb02ectional suf\ufb01xes as compared to native ones. For simplicity, we use the \ufb01rst two characters of each word as the word stem; we will evaluate the robustness of DTIM to varying stem lengths in our empirical evaluation, while consistently using the stem length of two characters in our description. We start by associating each distinct word stem in W with the number of unique third characters that follow it (among words in W); in our examples, |pu|ra| and |o|pa| would be associated with 4 and 2 respectively. We initialize the native-ness weights as proportional to the diversity of 3rd characters beyond the stem: wn0 = min \u001a 0.99, |u3(wstem, W)| \u03c4 \u001b (1) where u3(wstem, W) denotes the set of third characters that follow the stem of word w among words in W. We \ufb02atten off wn0 scores beyond a diversity of \u03c4 (note that a diversity of \u03c4 or 2We will represent Malayalam words in transliterated form for reading by those who might not be able to read Malayalam. A pipe would separate Malayalam characters; for example |pu| corresponds to a single Malayalam character. \fhigher will lead to the second term becoming 1.0 or higher, kicking in the min function to choose 0.99 for wn0) as shown in the above equation. We give a small transliterable-ness weight even to highly diverse stems to reduce over-reliance on the initialization. We set \u03c4 = 10 based on our observation from the dataset that most word stems having more than 10 distinct characters were seen to be native. As in the case of word stem length, we study DTIM trends across varying \u03c4 in our empirical analysis. wn0 is in [0, 1]; analogously, (1 \u2212 wn0) may be regarded as a score of transliterableness. 4.2 Objective Function and Optimization Framework As outlined earlier, we use separate character ngram probability distributions to model native and transliterable words. We would like these probability distributions support the nativeness scoring, and vice versa. While the size of the n-grams (i.e., whether n = 1, 2, 3 or 4) is a system-level parameter, we use n = 1 for simplicity in our description. We denote the native and transliterable distributions as N and T respectively, with N(c) and T (c) denoting the weight associated with the character c according to the distributions. Consider the following function, given a particular state for the N, T and wns: Y w\u2208W Y c\u2208w \u0012 w2 n \u00d7 N(c) + (1\u2212wn)2 \u00d7 T (c) \u0013 (2) This measures the aggregate supports for words in W, the support for each word measured as an interpolated support from across the distributions N and T with weighting factors squares of the nativeness scores (i.e., wns) and transliterableness scores (i.e., (1 \u2212 wn)s) respectively. Similar mixing models have been used earlier in emotion lexicon learning (Bandhakavi et al., 2014) and solution post discovery (Deepak and Visweswariah, 2014). The squares of the nativeness scores are used in our model (instead of the raw scores) for optimization convenience. A highly native word should intuively have a high wn (nativeness) and a high support from N and correspondingly low transliterable-ness (i.e., (1 \u2212wn)) and support from T ; a highly transliterable word would be expected to have exactly the opposite. Due to the design of Eq. 
2 in having the higher terms multiplied with each other (and so for the lower terms), this function would be maximized for a desirable estimate of the variables \u03b8 = {N, T , {. . . , wn, . . .}}. Conversely, by striving to maximize the objective function, we would arrive at a desirable estimate of the variables. An alternative construction yielding a minimizing objective would be as follows: Y w\u2208W Y c\u2208w \u0012 (1\u2212wn)2 \u00d7 N(c) + w2 n \u00d7 T (c) \u0013 (3) In this form, given a good estimate of the variables, the native (transliterable) words have their nativeness (transliterableness) weights multiplied with the support from the transliterable (native) models. In other words, maximizing the objective in Eq. 2 is semantically similar to minimizing the objective in Eq. 3. As we will illustrate soon, it is easier to optimize for N and T using the maximizing formulation in Eq. 2 while the minimizing objective in Eq. 3 yields better to optimize for the word nativeness scores, {. . . , wn, . . .}. 4.3 Learning N and T using the Maximizing Objective We start by taking the log-form of the objective in Eq. 2 (this does not affect the optimization direction), yielding: Omax = X w\u2208W X c\u2208w ln \u0012 w2 n\u00d7N(c)+(1\u2212wn)2\u00d7T (c) \u0013 (4) The distributions, being probability distributions over n-grams, should sum to zero. This constraint, for our unigram models, can be written as: X c N(c) = X c T (c) = 1 (5) Fixing the values of {. . . , wn, . . .} and T (or N), we can now identify a better estimate for N (or T ) by looking for an optima (i.e., where the objective function has a slope of zero). Towards that, we take the partial derivative (or slope) of the objective for a particular character. : \u2202Omax \u2202N(c\u2032) = \u0012 X w\u2208W freq(c\u2032, w) \u00d7 w2 n \u0000w2 nN(c\u2032) + (1 \u2212wn)2T (c\u2032) \u0001 \u0013 +\u03bbN (6) where freq(c\u2032, w) is the frequency of the character c\u2032 in w and \u03bbN denotes the Lagrangian multiplier corresponding to the sum-to-unity constraint \ffor N. Equating this to zero does not however yield a closed form solution for N \u2032, but a simple re-arrangement yields an iterative update formula: N(c\u2032) \u221d X w\u2208W freq(c\u2032, w) \u00d7 w2 n \u0000w2 n + (1 \u2212wn)2 T (c\u2032) NP (c\u2032) \u0001 (7) The N term in the RHS is denoted as NP to indicate the usage of the previous estimate of N. The sum-to-one constraint is trivially achieved by \ufb01rst estimating the N(c\u2032)s by treating Eq. 7 as equality, followed by normalizing the scores across the character vocabulary. Eq. 7 is intuitively reasonable, due to establishing a somewhat direct relationship between N and wn (in the numerator), thus allowing highly native words to contribute more to building N. The analogous update formula for T \ufb01xing N turns out to be: T (c\u2032) \u221d X w\u2208W freq(c\u2032, w) \u00d7 (1 \u2212wn)2 \u0000(1 \u2212wn)2 + w2 n N (c\u2032) TP (c\u2032) \u0001 (8) Eq. 7 and Eq. 8 would lead us closer to a maxima for Eq. 4 is their second (partial) derivatives are negative3. To verify this, we note that the second (partial) derivative wrt N(c\u2032) is as follows \u22022Omax \u22022N(c\u2032) = (\u22121)\u00d7 X w\u2208W freq(c\u2032, w)(w2 n)2 \u0000w2 nN(c\u2032) + (1 \u2212wn)2T (c\u2032) \u00012 (9) It is easy to observe that the RHS is a product of \u22121 and a sum of a plurality of positive terms (square terms that are trivially positive, with the exception being the freq(., .) 
term which is also non-negative by de\ufb01nition), altogether yielding a negative value. That the the second (partial) derivative is negative con\ufb01rms that the update formula derived from the \ufb01rst partial derivative indeed helps in maximizing Omax wrt N(c\u2032). A similar argument holds for the T (c\u2032) updates as well, which we omit for brevity. 3http://mathworld.wolfram.com/SecondDerivativeTest.html 4.4 Learning the nativeness scores, {. . . , wn, . . .}, using the Minimizing Objective Analogous to the previous section, we take the log-form of Eq. 3: Omin = X w\u2208W X c\u2208w ln \u0012 (1\u2212wn)2\u00d7N(c)+w2 n\u00d7T (c) \u0013 (10) Unlike the earlier case, we do not have any constraints since the sum-to-unit constraint on the nativeness and transliterableness scores are built in into the construction. We now \ufb01x the values of all other variables and \ufb01nd the slope wrt w\u2032 n, where w\u2032 indicates a particular word in W. \u2202Omin \u2202w\u2032 n = X c\u2208w\u2032 2w\u2032 nT (c) + 2w\u2032 nN(c) \u22122N(c) \u0000w\u20322 n T (c) + (1 \u2212w\u2032 n)2N(c) \u0001 (11) We equate the slope to zero and form an iterative update formula, much like in the distribution estimation phase. w\u2032 n = P c\u2208w\u2032 N (c) (w\u20322 n T (c)+(1\u2212w\u2032 n)2N (c)) P c\u2208w\u2032 N (c)+T (c) (w\u20322 n T (c)+(1\u2212w\u2032 n)2N (c)) (12) Using the previous estimates of w\u2032 n for the RHS yields an iterative update form for the nativeness scores. As in the model estimation phase, the update rule establishes a reasonably direct relationship between w\u2032 n and N. Since our objective is to minimize Omin, we would like to verify the direction of optimization using the second partial derivative. \u22022Omin \u22022w\u2032 n = X c\u2208w\u2032 N(c)T (c) \u2212 \u0000w\u2032 nT (c) \u2212(1 \u2212w\u2032 n)N(c) \u00012 \u0000w\u20322 n T (c) + (1 \u2212w\u2032 n)2N(c) \u00012 (13) We provide an informal argument for the positivity of the second derivative; note that the denominator is a square term making it enough to analyze just the numerator term. Consider a highly native word (high w\u2032 n) whose characters would intuitively satisfy N(c) > T (c). For the boundary case of w\u2032 n = 1, the numerator term reduces \fto T (c) \u00d7 (N(c) \u2212T (c)) which would be positive given the expected relation between N(c) and T (c). A similar argument holds for highly transliterable words. For words with w\u2032 n \u21920.5 where we would expect N(c) \u2248T (c), the numerator becomes N(c)T (c)\u22120.25(T (c)\u2212N(c))2, which is expected to be positive since the difference term is small, making it\u2019s square very small in comparison to the \ufb01rst product term. To outline the informal nature of the argument, it may be noted that T (c) > N(c) may hold for certain characters within highly native words; but as long as most of the characters within highly native words satisfy the N(c) > T (c), there would be suf\ufb01cient positivity to offset the negative terms induced with such outlier characters. Algorithm 1: DTIM Input: A set of Malayalam words, W Output: A nativeness scoring wn \u2208[0, 1] for every word w in W Hyper-parameters: word stem length, \u03c4, n Initialize the wn scores for each word using the diversity metric in Eq. 1 using the chosen stem length and \u03c4 while not converged and number of iterations not reached do Estimate n-gram distributions N and T using Eq. 7 and Eq. 8 respectively Learn nativeness weights for each word using Eq. 
12 end return latest estimates of nativeness weights 4.5 DTIM: The Method Having outlined the learning steps, the method is simply an iterative usage of the learning steps as outlined in Algorithm 1. In the \ufb01rst invocation of the distribution learning step where previous estimates are not available, we simply assume a uniform distribution across the n-gram vocabulary for usage as the previous estimates. Each of the update steps are linear in the size of the dictionary, making DTIM a computationally lightweight method. Choosing n = 2 instead of unigrams (as used in our narrative) is easy since that simply involves replacing the c \u2208w all across the update steps by [ci, ci+1] \u2208w with [ci, ci+1] denoting pairs of contiguous characters within the word; similarly, n = 3 involves usage of contiguous character triplets and correspondingly learning the distributions N and T over triplets. The DTIM structure is evidently inspired by the ExpectationMaximization framework (Dempster et al., 1977) involving alternating optimizations of an objective function; DTIM, however, uses different objective functions for the two steps for optimization convenience. 5 Experiments We now describe our empirical study of DTIM, starting with the dataset and experimental setup leading on to the results and analyses. 5.1 Dataset We evaluate DTIM on a set of 65068 distinct words from across news articles sourced from Mathrubhumi4, a popular Malayalam newspaper; this word list is made available publicly5. For evaluation purposes, we got a random subset of 1035 words labeled by one of three human annotators; that has been made available publicly6 too, each word labeled as either native, transliterable or unknown. There were approximately 3 native words for every transliterable word in the labeled set, re\ufb02ective of distribution in contemporary Malayalam usage as alluded to in the introduction. We will use the whole set of 65068 words as input to the method, while the evaluation would obviously be limited to the labelled subset of 1035 words. 5.2 Baselines As outlined in Section 2, the unsupervised version of the problem of telling apart native and transliterable words for Malayalam and/or similar languages has not been addressed in literature, to the best of our knowledge. The unsupervised Malayalam-focused method(Prasad et al., 2014) (Ref: Sec 2.2) is able to identify only transliterable word-pairs, making it inapplicable for contexts such as a health data scenario where individual english words are often transliterated for want of a suitable malayalam alternative. The Korean method(Koo, 2015) is too speci\ufb01c to Korean language and cannot be used for other languages due to the absence of a generic high-precision rule to identify a seed set of transliterable words. With both the unsuper4http://www.mathrubhumi.com 5Dataset: https://goo.gl/DOsFES 6Labeled Set: https://goo.gl/XEVLWv \fvised state-of-the-art approaches being inapplicable for our task, we compare against an intuitive generalization-based baseline, called GEN, that orders words based on their support from the combination of a unigram and bi-gram character language model learnt over W; this leads to a scoring as follows: wGEN n = Y [ci,ci+1]\u2208w \u03bb \u00d7 BW(ci+1|ci) + (1\u2212\u03bb) \u00d7 UW(ci+1) (14) where BW and UW are bigram and unigram character-level language models built over all words in W. We set \u03bb = 0.8 (Smucker and Allan, 2006). 
We experimented with higher-order models in GEN, but observed drops in evaluation measures leading to us sticking to the usage of the unigram and bi-gram models. The form of Eq. 14 is inspired by an assumption similar to that used in both (Prasad et al., 2014) and (Koo, 2015) that transliterable words are rare. Thus, we expect they would not be adequately supported by models that generalize over whole of W. We also compare against our diversitybased initialization score from Section 4.1, which we will call as INIT. For ease of reference, we outline the INIT scoring: wINIT n = min \u001a 0.99, |u3(wstem, W)| \u03c4 \u001b (15) The comparison against INIT enables us to isolate and highlight the value of the iterative update formulation vis-a-vis the initialization. 5.3 Evaluation Measures and Setup As outlined in Section 3, we use top-k, bottom-k and avg-k precision (evaluated at varying values of k) as well as clustering quality in our evaluation. For the comparative evaluaton, we set DTIM parameters as the following: \u03c4 = 10 and a wordstem length of 2. We will study trends against variations across these parameters in a separate section. 5.4 Experimental Results 5.4.1 Precision at the ends of the Ordering Table 1 lists the precision measures over various values of k. It may be noted that any instantiation of DTIM (across the four values of n-gram size, n) is able to beat the baselines convincingly on each metric on each value of k, convincingly establishing the effectiveness of the DTIM formulation. DTIM is seen to be much more effective in separating out the native and transliterable words at either ends of the ordering, than the baselines. It is also notable that EM-style iterations are able to signi\ufb01cantly improve upon the initialization (i.e., INIT). That the bottom-k precision is seen to be consistently lower than top-k precision needs to be juxtaposed in the context of the observation that there were only 25% transliterable words against 75% native words; thus, the lift in precision against a random ordering is much more substantial for the transliterable words. The trends across varying n-gram sizes (i.e., n) in DTIM is worth noting too; the higher values of n (such as 3 and 4) are seen to make more errors at the ends of the lists, whereas they catch-up with the n \u2208{1, 2} versions as k increases. This indicates that smaller-n DTIM is being able to tell apart a minority of the words exceedingly well (wrt native-ness), whereas the higher n-gram modelling is able to spread out the gains across a larger spectrum of words in W. Around n = 4 and beyond, sparsity effects (since 4-grams and 5-grams would not occur frequently, making it harder to exploit their occurrence statistics) are seen to kick in, causing reductions in precision. 5.4.2 Clustering Quality Table 2 lists the clustering quality metric across the various methods. Clustering quality, unlike the precision metrics, is designed to evaluate the entire ordering without limiting the analysis to just the top-k and bottom-k entries. As in the earlier case, DTIM convincingly outperforms the baselines by healthy margins across all values of n. Consequent to the trends across n as observed earlier, n \u2208{3, 4} are seen to deliver better accuracies, with such gains tapering off beyond n = 4 due to sparsity effects. The words, along with the DTIM nativeness scores for n = 3, can be viewed at https://goo.gl/OmhlB3 (the list excludes words with fewer than 3 characters). 
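For concreteness, the precision-at-the-ends measures reported above can be computed from a score-ordered word list and the labelled subset as in the following sketch. This is an assumed implementation for illustration only (the function and variable names are ours, and avg-k is taken here as the mean of the top-k and bottom-k values).

```python
def ends_precision(scored_words, labels, k):
    """scored_words: list of (word, nativeness_score) pairs.
    labels: dict mapping a labelled word to 'native' or 'transliterable'."""
    # Order all words by decreasing nativeness and keep only the labelled ones.
    ordered = [w for w, _ in sorted(scored_words, key=lambda x: -x[1]) if w in labels]
    top, bottom = ordered[:k], ordered[-k:]
    top_k = sum(labels[w] == 'native' for w in top) / max(len(top), 1)
    bottom_k = sum(labels[w] == 'transliterable' for w in bottom) / max(len(bottom), 1)
    return top_k, bottom_k, (top_k + bottom_k) / 2.0
```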
5.5 Analyzing DTIM We now analyze the performance of DTIM across varying values of the diversity threshold (\u03c4) and word-stem lengths. \fk=50 k=100 k=150 k=200 Top-k Bot-k Avg-k Top-k Bot-k Avg-k Top-k Bot-k Avg-k Top-k Bot-k Avg-k INIT 0.88 0.50 0.69 0.90 0.40 0.65 0.90 0.41 0.66 0.90 0.38 0.64 GEN 0.64 0.10 0.37 0.58 0.11 0.35 0.60 0.15 0.38 0.64 0.17 0.41 DTIM (n=1) 0.94 0.64 0.79 0.90 0.56 0.73 0.90 0.49 0.70 0.92 0.48 0.70 DTIM (n=2) 1.00 0.78 0.89 0.94 0.68 0.81 0.93 0.57 0.75 0.95 0.52 0.74 DTIM (n=3) 0.86 0.76 0.81 0.91 0.75 0.83 0.92 0.69 0.81 0.92 0.64 0.78 DTIM (n=4) 0.82 0.74 0.78 0.87 0.69 0.78 0.83 0.62 0.73 0.85 0.65 0.75 Table 1: Top-k and Bottom-k Precision (best result in each column highlighted) Native Transliterable Weighted Average INIT 0.79 0.38 0.69 GEN 0.73 0.17 0.59 DTIM (n=1) 0.81 0.44 0.72 DTIM (n=2) 0.84 0.50 0.75 DTIM (n=3) 0.86 0.60 0.79 DTIM (n=4) 0.86 0.60 0.79 Table 2: Clustering Quality (best result in each column highlighted) \u03c4 \u2192 5 10 20 50 100 1000 n = 1 0.72 0.72 0.72 0.72 0.72 0.72 n = 2 0.74 0.75 0.75 0.74 0.74 0.74 n = 3 0.77 0.79 0.78 0.78 0.78 0.78 n = 4 0.78 0.79 0.79 0.79 0.79 0.79 Table 3: DTIM Clustering Quality against \u03c4 Stem Length \u2192 1 2 3 4 n = 1 0.64 0.72 0.75 0.56 n = 2 0.58 0.75 0.74 0.55 n = 2 0.59 0.79 0.69 0.60 n = 2 0.58 0.79 0.69 0.62 Table 4: DTIM Clustering Quality against Word Stem Length (best result in each row highlighted) 5.5.1 Diversity Threshold Table 3 analyzes the clustering quality trends of DTIM across varying values of \u03c4. The table suggests that DTIM is extremely robust to variations in diversity threshold, despite a slight preference towards values around 10 and 20. This suggests that a system designer looking to use DTIM need not carefully tune this parameter due to the inherent robustness. 5.5.2 Word Stem Length Given the nature of Malayalam language where the variations in word lengths are not as high as in English, it seemed very natural to use a word stem length of 2. Moreover, very large words are uncommon in Malayalam. In our corpus, 50% of words were found to contain \ufb01ve characters or less, the corresponding fraction being 71% for upto seven characters. Our analysis of DTIM across variations in word-stem length, illustrated in Table 4 strongly supports this intuition with clustering quality peaking at stem-length of 2 for n \u22652. It is notable, however, that DTIM degrades gracefully on either side of 2. Trends across different settings of word-stem length are interesting since they may provide clues about applicability for other languages with varying character granularities (e.g., each Chinese character corresponds to multiple characters in Latin-script). 6 Discussion 6.1 Applicability to Other Languages In contrast to earlier work focused on speci\ufb01c languages (e.g., (Koo, 2015)) that use heuristics that are very speci\ufb01c to the language (such as expected patterns of consonants), DTIM heuristics are general-purpose in design. The only heuristic setting that is likely to require some tuning for applicability in other languages is the word-stem length. We expect the approach would generalize \fwell to other Sanskrit-in\ufb02uenced Dravidian languages such as Kannada/Telugu. 
Unfortunately, we did not have any Kannada/Telugu/Kodava knowledge (Dravidian languages have largely disjoint speaker-populations) in our team, or access to labelled datasets in those languages (they are low-resource languages too); testing this on Kannada/Telugu/Tamil would be interesting future work. The method is expected to be less applicable to English, the language being signi\ufb01cantly different and with potentially fewer transliterable words. 6.2 DTIM in an Application Context Within any target application context, machinelabelled transliterable words (and their automatically generated transliterations) may need to manual screening for accountability reasons. The high accuracy at either ends of the ordering lends itself to be exploited in the following fashion; in lieu of employing experts to verify all labellings/transliterations, low-expertise volunteers (e.g., students) can be called in to verify labellings at the ends (top/bottom) of the lists with experts focusing on the middle (more ambiguous) part of the list; this frees up experts\u2019 time as against a cross-spectrum expert-veri\ufb01cation process, leading to direct cost savings. We also expect that DTIM followed by automatic transliterations of bottom-k words would aid in retrieval and machine translation scenarios. 7" + } + ], + "Savitha Sam Abraham": [ + { + "url": "http://arxiv.org/abs/2403.03203v1", + "title": "CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments", + "abstract": "The integration of learning and reasoning is high on the research agenda in\nAI. Nevertheless, there is only a little attention to use existing background\nknowledge for reasoning about partially observed scenes to answer questions\nabout the scene. Yet, we as humans use such knowledge frequently to infer\nplausible answers to visual questions (by eliminating all inconsistent ones).\nSuch knowledge often comes in the form of constraints about objects and it\ntends to be highly domain or environment-specific. We contribute a novel\nbenchmark called CLEVR-POC for reasoning-intensive visual question answering\n(VQA) in partially observable environments under constraints. In CLEVR-POC,\nknowledge in the form of logical constraints needs to be leveraged to generate\nplausible answers to questions about a hidden object in a given partial scene.\nFor instance, if one has the knowledge that all cups are colored either red,\ngreen or blue and that there is only one green cup, it becomes possible to\ndeduce the color of an occluded cup as either red or blue, provided that all\nother cups, including the green one, are observed. 
Through experiments, we\nobserve that the low performance of pre-trained vision language models like\nCLIP (~ 22%) and a large language model (LLM) like GPT-4 (~ 46%) on CLEVR-POC\nascertains the necessity for frameworks that can handle reasoning-intensive\ntasks where environment-specific background knowledge is available and crucial.\nFurthermore, our demonstration illustrates that a neuro-symbolic model, which\nintegrates an LLM like GPT-4 with a visual perception network and a formal\nlogical reasoner, exhibits exceptional performance on CLEVR-POC.", + "authors": "Savitha Sam Abraham, Marjan Alirezaie, Luc De Raedt", + "published": "2024-03-05", + "updated": "2024-03-05", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction Visual Question Answering (VQA) has been widely investigated by researchers from various subfields in AI like computer vision and natural language understanding. As a result, we now have access to a vast corpus of VQA datasets coupled with numerous models addressing the task of VQA (Zou and Xie, 2020; Wu et al., 2017). Most existing VQA datasets (Johnson et al., 2017; Antol et al., 2015) have a collection of images paired with questions such that all information required to answer the question is provided in the image, and hence the scene is considered complete. But in real life, we often engage in tasks where scenes may not be completely visible. We instead may have world knowledge about various locations visited by us, acquired over time, that allows us to generate plausible answers to queries about objects we do not see in a scene. For example, in autonomous vehicle scenarios, reasoning is crucial for dealing with partial observability. Comprehensive knowledge of traffic enables the system to interpret limited visual information and make informed decisions, ensuring safe navigation despite occlusions or limited field of view. Furthermore, in factory settings, reasoning combined with background knowledge about the environment can assist teams of robots in dealing with partial observability during navigation and other coordination and cooperation tasks. In this paper, we introduce a synthetic dataset, CLEVR-POC1, for a reasoning-intensive VQA task set in partially observable scenarios involving external knowledge in the form of constraints. The dataset consists of pairs of an image, representing a partial scene (B in Figure 1a) in some environment (D1 in Figure 1a where the environment is defined by a set of constraints), and a question in natural language about some hidden/missing object (C in Figure 1a) in the scene. Although in the literature, there exist datasets for QA tasks in partially observable environments (e.g., CLEVR-dialog (Kottur et al., 2019), Visual Dialog (Das et al., 2017), Guess What? (De Vries et al., 2017)), these do not come with additional background knowledge. 1The source code associated with this research project is openly accessible at https://github.com/ savithasam88/CLEVR-POC/tree/master arXiv:2403.03203v1 [cs.AI] 5 Mar 2024 \f(a) The different components in VQA tasks. (b) The different VQA tasks are based on expected inputs and outputs and the number of agents involved. Figure 1: VQA task components and types of VQA tasks The challenge presented in CLEVR-POC necessitates the integration of knowledge and multi-step reasoning involving eliminative induction, into perception systems driven by learning. 
Given that the knowledge associated with a scene typically varies depending on the specific environment involved, it is not a constant across the dataset. It becomes challenging for a learning system to simply memorize this knowledge during training iterations. Moreover, because this knowledge is environment-specific, employing Large Language Models (LLMs) such as GPT as the source of knowledge, as demonstrated in some of the recent works like (Zhou et al., 2023) and (Shah et al., 2023), does not yield favorable results. We substantiate these assertions through empirical experiments. The contributions of this paper are as follows: \u2022 We introduce a dataset, CLEVR-POC, that introduces the task of reasoning-intensive VQA given a partial scene, the constraints (knowledge) to which the scene conforms and a question about a hidden object in the scene, find the set of all plausible answers. \u2022 We evaluate the performance of state-of-theart pre-trained vision language and large language models on CLEVR-POC. \u2022 We demonstrate that the synergistic use of LLMs alongside a visual perception network and a formal reasoning system with access to external knowledge can efficiently and effectively address the challenges presented by CLEVR-POC. The organization of the paper is as follows. Section 2 provides an overview of existing work in VQA, focusing on various VQA datasets and briefly discussing LLM for reasoning. Section 3 delves into the detailed process of generating CLEVR-POC, while Section 4 outlines the research questions explored in this study. Additionally, this section presents the experiments conducted on CLEVRPOC and the corresponding results. 2. Related Work In this section, we provide an overview of research in two domains datasets in VQA and LLMs and reasoning. 2.1. Datasets in VQA A VQA task may involve various combinations of the different components shown in Figure 1a a complete scene (A), a partial scene (B), a question (C) about the scene, external knowledge in the form of rules/constraints (D1), or facts in knowledge graphs (D2), and the set of plausible answers to the question (E). Each combination results in a different VQA task (see Figure 1b). 2.2. Types of VQA Tasks 2.2.1. Task 1 Given a complete scene, and a question about an object in the scene, find the answer to the question. Since the scene is complete, the agent can come up with the exact answer implying that the solution set E has just one element (|E| = 1). DAQUAR (Malinowski and Fritz, 2014), VQA (Antol et al., 2015), CLEVR (Johnson et al., 2017) are datasets in this category. 2.2.2. Task 2 Given a complete scene, a question about one of the objects in the scene and external knowledge about objects (in the form of triples D2), find the answer to the question leveraging this external knowledge. FVQA (fact-based VQA) (Wang et al., 2017), and KVQA (knowledge aware VQA) (Shah et al., 2019) are datasets in this category. \f2.2.3. Task 3 While the above two tasks involve a single agent being posed with a scene and a question, this category of VQA tasks involves more than one agent. One of the agents has access to the complete scene while the other agent is provided with a partial scene and a question. Answering the question requires the agents to interact with each other. CLEVR-dialog (Kottur et al., 2019), Visual Dialog (Das et al., 2017), Guess What? (De Vries et al., 2017) are datasets handling Task 3. 2.2.4. 
Task 4 (CLEVR-POC) Given a partial scene, knowledge in the form of rules (constraints) about the environment to which the scene conforms and a question about the hidden object in the scene, find the set of all plausible answers to the question. Since the question is about a hidden object (for example, about the shape of the object), it may not be always possible to provide an exact solution. Answering the question is more about eliminating all cases that are inconsistent with the background knowledge (for example: given the knowledge there are no spheres in this environment) and returning all consistent answers as the solution (the shape is a cone or a cylinder or a cube, which is why |E| \u22651). In contrast to Task 2, where the knowledge graph encompasses general world facts (e.g.,\u201ccows are herbivores\u201d), the knowledge in this context is considerably more specific to an environment. While an LLM can be presumed to possess awareness of the former category of knowledge, the same cannot be said for the latter. 2.3. LLMs and Reasoning In this paper, our emphasis lies on the process of reasoning which depends on a formal system grounded in logical rules and principles. Such a system ensures that all transformations or manipulations of symbols within it, leading to new inferences, adhere to the logical rules and principles governing the system (MacColl, 1897). While LLMs can also be seen as performing symbolic manipulations, these manipulations unlike traditional symbolic reasoning are based on statistical associations or patterns learned from data (Huang and Chang, 2023), because of which it may or may not be logically sound. Despite the progress in the development of large language models (LLMs), many still struggle with a deep understanding of symbols like humans do (Abraham and Alirezaie, 2022; Yao et al., 2022). To address this gap, there are ongoing efforts to create benchmarks (Huang and Chang, 2023), like the proposed CLEVR-POC, to evaluate the reasoning capabilities of LLMs. In CLEVR-POC, we introduce a VQA task that involves constraint-based reasoning, a form of logical reasoning, where the generated response must satisfy a set of constraints given. These benchmarks are used to assess the capacity of language models in handling symbolic reasoning, contributing to the advancements in the development of more logically sound systems. 3. The CLEVR-POC Dataset Now we describe in detail the generation of the CLEVR-POC dataset. The dataset, as the name suggests, is based on the CLEVR (Johnson et al., 2017) dataset, which generated scenes with geometrical shapes. Each object is associated with four attributes color, shape, material, and size. The objects in CLEVR-POC can have one of the four shapes cone, sphere, cylinder, and cube, three sizes large, medium, and small, two materials rubber and metal, and eight colors red, blue, green, yellow, gray, brown and purple. Besides these four attributes, since a scene is divided into four regions (see Figure 1a), CLEVR-POC also associates an object with the region it is in 0, 1, 2 or 3. Each object belongs to exactly one region. Division of a scene into regions enables the specification of constraints at multiple levels. \u2022 Region-based constraints for example, all objects in Region 0 are of shapes cube or cylinder. These constraints must be satisfied by objects in the corresponding region. \u2022 Across-region constraints for example, the total number of objects sharing the same color in regions 1 and 2 is not more than 2. 
These are constraints specified across two regions. \u2022 Generic constraints for example, there are at least two cubes in the scene. These constraints apply to the whole scene. One of the major points of distinction in the scene generation process of CLEVR-POC from the original CLEVR is that the scenes in CLEVR-POC are generated such that they conform to a chosen set of constraints. The steps in creating an instance i in the dataset are: \u2022 Generating an environment Environmenti, defined by a set of constraints. \u2022 Generating a complete scene graph, Completei, that conforms to Environmenti. \u2022 Generating the partial scene graph, Partiali by removing one of the objects, Obji, from Completei. \u2022 Generating a question, Qi, about the partial scene with object of interest Obji. \fTemplate-1 (Value Restriction) :object(X),at(X, R\u2019), not hasProperty(X, P1\u2019, V1\u2019). Translation All objects at region R\u2019 have value V1\u2019 for the property P1\u2019. An instantiation :object(X),at(X, 0), not hasProperty(X, color, red). Template-2 (Negation Constraint) :object(X), at(X, R\u2019), hasProperty(X, P1\u2019, V1\u2019). Translation All objects at region R\u2019 cannot have value V1\u2019 for the property P1\u2019. An instantiation :object(X), at(X, 0), hasProperty(X, material, metal). Template-3 (Exactly N Constraint) :#count{X: hasProperty(X, P1\u2019, V1\u2019), object(X), at(X, R\u2019)}!=N\u2019 Translation There are exactly N\u2019 objects at region R\u2019 with value V1\u2019 for the property P1\u2019. An instantiation :#count{X: hasProperty(X, size, small), object(X), at(X, R\u2019)}!=2 Template-4 (Atleast N Constraint) :#count{X1, X2: sameProperty(X1, X2, P1\u2019), object(X1), object(X2), at(X1, R1\u2019), at(X2, R2\u2019)} < N\u2019. Translation There are at least N \u2032 pairs of objects at regions R1\u2019 and R2\u2019 that has the same value V1\u2019 for the property P1\u2019. An instantiation :#count{X1, X2: sameProperty(X1, X2, shape), object(X1), object(X2), at(X1, 1), at(X2, 2)}<1. Template-5 (OR Constraint) :object(X), at(X, R\u2019), not hasProperty(X, P1\u2019, V1\u2019), not hasProperty(X, P1\u2019, V2\u2019). Translation All objects in region R\u2019 have value V1\u2019 for property P1\u2019 or V2\u2019 for property P2\u2019. An instantiation :object(X), at(X, 1), not hasProperty(X, color, yellow), not hasProperty(X, color, blue). Table 1: A few constraint templates 3.1. Environment Representation An environment in CLEVR-POC is defined by a set of constraints. We provide a set of 11 constraint templates with CLEVR-POC that are expressed in answer set programming (ASP)2. Each environment is created by at most 15 different instantiations of these templates, provided there are at least two constraints associated with each region. A few example constraint templates with their translation in English and an instantiation are shown in Table 1. Around 30 different environments are generated (see Appendix A for an example) and the scenes in the dataset belong to one of these 30 environments the dataset generation process ensures that the scenes are uniformly distributed across the 30 environments. 3.2. Scene Representation CLEVR represented a scene in the form of a scene graph whose nodes represented objects annotated with its attributes and edges denoted the spatial relations (left, right, front, behind) between objects. In CLEVR-POC, besides the scene graph representation, we also represent a scene in ASP. 
Below we show part of the ASP representation of the partial scene in Figure 1a. %Objects in the scene object(0). object(1). object(2). object(3). %Attributes of objects 2ASP is a declarative programming paradigm applied to solve complex search problems (Lifschitz, 2008) at(0, 2). hasProperty(0, color, green). hasProperty(0, size, large). hasProperty(0, material, rubber). hasProperty(0, shape, cylinder). .... %Spatial relations between objects front(1, 0). right(1, 0). ... The predicate object is used to define the different objects (denoted using identifiers 0, 1, ..). hasProperty(Object, Attribute, Value) associates a Value for an Attribute of an Object. at(Object, Region) represents the region where the object is located. The spatial relations between objects are represented with predicates left, right, front, behind for example, left(Object1, Object2) represents that Object2 is located left of Object1. 3.3. Image Generation While the images in CLEVR are generated from a randomly sampled scene graph, CLEVR-POC generates its images from scene graphs known to adhere to constraints defining an environment. Scene graph creation is thus a reasoning problem given an environment (constraints in ASP) and a desired number of objects (n) in the scene, the goal is to assign each object to one of the four regions and propose values to color, size, shape, and material that are consistent with the constraints in the environment. An ASP reasoning engine solves this problem each answer (a consistent property\f(a) Pipeline for generating environment and complete scenes in that environment. (b) Pipeline for generating partial scenes, and questions and then labeling them with answers. Figure 2: Two steps in dataset generation process: Figure 2a shows the first step environment generation from constraint templates and generating complete scenes satisfying these constraints. Figure 2b shows Step 2 partial scene and question generation from a complete scene. value assignment for the n objects) in the answer set returned is a scene graph or a possible configuration of the objects in the scene. Since there are many possible configurations we randomly sample a million of these scene graphs for the subsequent image generation phase. A scene graph is then rendered using Blender3. The image representing the partial scene is generated from a partial scene graph constructed from the actual scene graph by randomly removing one of the objects from it. Figure 2a shows the scene graph construction process. 3.4. Question Representation The questions in CLEVR-POC query about one of the four attributes color, size, shape, and material of the missing/hidden object in the partial scene. Besides representing the questions using an equivalent functional program as in CLEVR, CLEVRPOC also represents it in ASP. An example question and its ASP form are shown below: Question: What is the color of the other cylinder that is the same material as the medium red thing? query(Q):hasProperty(X,color,Q), hasProperty(X,shape,cylinder), hasProperty(Y,size,medium), hasProperty(Y,color,red), same material(Y,X), X!=Y. If the query is about attribute A, A \u2208 {color, size, material, shape}, the questions are generated such that the cardinality of the set of possible solutions (S) is 1 \u2264|S| < |A|, where |A| is 3https://www.blender.org/ the set of all values for the attribute A (for example |size| = 3 = |{large, medium, small}|). 
If the question generated has |A| solutions (for instance, a solution like, \u2018size is large or small or medium\u2019 is true for any question), it is considered invalid. The questions are balanced across the question types (that depend on the query attribute see Appendix B for the distribution). It should be noted that the solution space of CLEVR-POC questions is 16 times that of CLEVR as the solutions expected are not always a single value, but a set of values. 3.5. Question Generation The question in CLEVR-POC is generated from the question templates available in CLEVR. We avoid the yes/no (existence, comparison) and counting questions and focus on just the attribute querying templates. An example template is as follows: What shape is the < Z2 > (size) < C2 > (color) < M2 > (material) [that is] < R > (relation) the < Z > (size) < C > (color) < M > (material) < S > (shape)? Question template instantiation is done based on the complete scene graph of the associated image. The object of interest is always the object that is removed from the complete scene to generate the partial scene graph. The query attribute is picked such that it satisfies the question type balancing requirements. The known attributes of the query object (filling the slots < Z2 > or < C2 > or < M2 > in the above template) are randomly selected. While the filler for the slot < R > (one of the left, right, front, behind) is randomly picked, the reference object in the query is picked based on the spatial relations available in the complete scene \fpicking one of the objects that are in < R > relation of the query object. The ASP representations of the question, the incomplete scene, and the constraints in the environment are given to an ASP solver to identify the set of possible values for the query attribute. Figure 2b shows the pipeline of question generation. Refer to Appendix A and B for a detailed example and statistics of CLEVR-POC. 4. Experiments The experiments are designed to answer the following research questions (RQ): \u2022 RQ1: How do neural-based vision language models perform on reasoning-intensive VQA tasks (with an emphasis on symbolic knowledge representation and reasoning)? \u2022 RQ2: How well do neuro-symbolic vision language architectures handle reasoningintensive VQA tasks (in the context of mapping raw inputs to symbolic space)? \u2022 RQ3: How can we leverage LLMs in reasoning-intensive VQA tasks and what are the challenges associated with it? In the sections following, we describe the methods implemented to answer these questions. 4.1. Methods 4.1.1. CLIP-based model CLIP (Contrastive Language Image Pre-training) (Radford et al., 2021) is a vision-language model that is trained to align pairs of text and images to a unified space. We experimented with the CLIP model to investigate RQ1. Figure 3 shows the architecture of a CLIP-based model to solve CLEVR-POC. The pre-trained vision transformer (ViT-B/32) and the text encoders (masked selfattention) in CLIP are leveraged to obtain encodings for the incomplete scene and the question. The encoding for the environment is obtained from its constraints. A pre-trained GPT-2 (Radford et al., 2019) model is used to encode the constraints. As GPT-2 is more language-oriented, we input the natural language version of ASP constraints (while experimenting with ASP-form constraints to assess their impact on performance). 
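A rough sketch of how the three encodings could be obtained with off-the-shelf Hugging Face checkpoints is given below. The specific checkpoints, the mean pooling of GPT-2 hidden states and other details are our assumptions rather than the exact configuration used in the experiments.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2Model, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2Model.from_pretrained("gpt2")
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")

def encode_instance(image: Image.Image, question: str, constraints_nl: str) -> torch.Tensor:
    """Encode the partial scene, the question and the constraints (in natural language)."""
    img_in = clip_proc(images=image, return_tensors="pt")
    txt_in = clip_proc(text=[question], return_tensors="pt", padding=True, truncation=True)
    img_emb = clip.get_image_features(**img_in)            # scene encoding
    q_emb = clip.get_text_features(**txt_in)               # question encoding
    c_in = gpt2_tok(constraints_nl, return_tensors="pt", truncation=True)
    c_emb = gpt2(**c_in).last_hidden_state.mean(dim=1)     # mean-pooled constraint encoding
    # The concatenation is what gets passed to the feed-forward multi-label classifier.
    return torch.cat([img_emb, q_emb, c_emb], dim=-1)
```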
The problem is formulated as a multi-label classification problem where the output is one or more of the following 17 labels {red, blue, green, yellow, cyan, brown, gray, purple, rubber, metal, large, small, medium, cone, cube, cylinder, sphere}. Hence, the three encodings are passed to a multilabel classifier (feed-forward network) which is the only module of the whole model that is trained from scratch. The classifier is trained with a weighted binary cross entropy loss function (Ho and Wookey, 2019) that gives more penalty to the wrong prediction of minority class (as most of the labels in the output are 0, except for the ones in the answer a false negative is given more penalty). For each of the 17 labels, the weighted cross entropy loss is thus defined as below: WCE (y, \u02c6 y) = \u2212(\u03b2y log(\u02c6 y) + (1 \u2212y) log(1 \u2212\u02c6 y)) (1) \u03b2 is the weight (is set > 1 to penalize false negatives)4, y is the ground truth, \u02c6 y is the prediction. 4.1.2. Neuro-Symbolic Visual Question Answering The architecture for the neuro-symbolic approach to solving CLEVR-POC task is shown in Figure 4. The idea is to convert both the image and the question into a unified space as in CLIP, with the difference that this space is symbolic (scene graph and question in ASP). The architecture is based on the state-of-the-art neuro-symbolic approach on the CLEVR dataset, NS-VQA (Yi et al., 2018) and will be used here to study aspects of RQ2. We modify this architecture to include an ASP solver that takes as input the scene in ASP, the question in ASP, and the environment constraints in ASP to derive the answer to the question. The question parser, (a Bidirectional Long Short Term Memory (BiLSTM) sequence to sequence model) is trained as in NS-VQA using REINFORCE the reward is positive if the ASP program generated by the parser results in the correct answer, else it is 0. The question parser is initially pre-trained in a fully supervised way with a small sample of (question, ASP program) pairs. The image perception network in NS-VQA is based on Detectron (Girshick et al., 2018) and it was trained independently of the question parser in a supervised way. The ASP solver used is the same as the one used during the dataset generation phase. 4.1.3. LLMs for solving CLEVR-POC LLMs are leveraged in two ways for solving a reasoning task like CLEVR-POC. LLM as question parser in NS-VQA: In this approach, we use LLM as a question parser converting the question into a semantic representation like ASP. The image is converted to a scene graph as done in NS-VQA. Both semantic representations are then passed on to a formal reasoner like an ASP solver to derive solutions consistent with the constraints. Stand-alone LLM: The second approach is to provide both the image description and the question along with the constraints (in NL) as input to LLM and generate as a response the consistent solutions. We, here, assume as done in NS-VQA that 4The results in Section 4.3 are for \u03b2 = 5. \fFigure 3: CLIP for CLEVR-POC Figure 4: NS-VQA for CLEVR-POC architecture is updated with an ASP solver the scene graphs are accurate, as our focus is on evaluating LLMs\u2019 ability to perform symbolic reasoning. CLEVR-POC, a synthetic dataset where environment-specific knowledge is not fixed, can assess LLMs\u2019 symbolic reasoning ability without data contamination (where the dataset becomes unusable once it has been exploited). 
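The first of these two usages can be pictured with a short sketch: the LLM translates the question into an ASP query, which is then solved together with the scene facts and the environment constraints by an off-the-shelf solver such as clingo. The helper names below and the way the LLM is invoked are illustrative assumptions, not the exact setup of the paper.

```python
import clingo

def parse_question_with_llm(question: str) -> str:
    """Placeholder for the LLM step: prompt the model with a few (question, ASP) pairs
    and return an ASP query such as
    'query(Q) :- hasProperty(X, color, Q), hasProperty(X, shape, cylinder), ...'."""
    raise NotImplementedError

def answer(scene_asp: str, constraints_asp: str, question: str):
    """Collect every value of Q that is consistent with the scene and the constraints."""
    query_asp = parse_question_with_llm(question)
    ctl = clingo.Control(["0"])  # "0" asks clingo to enumerate all answer sets
    ctl.add("base", [], scene_asp + "\n" + constraints_asp + "\n" + query_asp + "\n#show query/1.")
    ctl.ground([("base", [])])
    answers = set()
    ctl.solve(on_model=lambda m: answers.update(
        str(sym.arguments[0]) for sym in m.symbols(shown=True)))
    return answers
```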
The LLM used in the experiments is GPT-4 (OpenAI, 2023) (See Appendix C for details about prompts used). 4.2. Evaluation Let A = {a1, a2,..} denote the set of values in the actual answer and P = {p1, p2,..} denote the predicted answer set. We evaluate the performance of the two approaches on CLEVR-POC using the two metrics based on accuracy. Exact Accuracy checks whether the prediction made is exactly accurate, i.e., A is exactly equal to P. Exact Accuracy(A, P) = ( 1 if x \u2208A \u21d0 \u21d2x \u2208P 0 otherwise (2) Jaccard Index computes the similarity between the actual answer and predicted answer sets as: Jaccard Index (A, P) = |A \u2229P| |A \u222aP| (3) The value of Jaccard Index is between 0 (no common elements) and 1 (exact match). It gives some credit for partially correct answers as well. 4.3. Results Tables 2a and 2b show the results for exact and partial answer accuracies respectively for NS-VQA, CLIP-based models, and stand-alone GPT-4 on CLEVR-POC. While NS-VQA (BiLSTM) uses a BiLSTM trained from scratch as the question parser, NS-VQA (GPT-4) uses pre-trained GPT4 as the question parser. We experimented with varying dataset sizes 2000, 6000, and then 12000 instances. 5 It can be seen that with a multifold increase in the dataset size, there is an improvement in the answer accuracy, but the performance is not satisfactory. RQ1 CLIP-based model analysis: Since the question is not about some object in the scene, and the set of constraints to be satisfied is also not fixed across the instances in the dataset, it is challenging to learn a mapping from the three inputs (the incomplete scene, the natural language question, and the constraints) to the output set of plausible values. Table 2 shows three sets of results for CLIP. The columns CLIP-NL and CLIP5The models are trained on Intel\u00ae CoreTM i712700K, 32GB RAM, and an NVIDIA GeForce RTX 3080 Ti for training. \fDataset NS-VQA (BiLSTM) NS-VQA (GPT-4) CLIP-ASP CLIP-NL CLIP (no knowledge) GPT-4 2000 0.0200 0.9250 0.0350 0.0600 0.0500 0.4626 6000 0.1516 0.9550 0.1500 0.1700 0.1183 12000 0.2308 0.9441 0.1800 0.2283 0.1483 (a) Exact answer accuracies of CLIP, NS-VQA and GPT-4 models on CLEVR-POC. Dataset NS-VQA (Bi-LSTM) NS-VQA (GPT-4) CLIP-ASP CLIP-NL CLIP (no knowledge) GPT-4 2000 0.0591 0.9287 0.1000 0.1557 0.1412 0.5164 6000 0.3602 0.9578 0.3100 0.3403 0.2447 12000 0.4331 0.9496 0.3600 0.4465 0.2912 (b) Jaccard Index of CLIP, NS-VQA and GPT-4 models on CLEVR-POC Table 2: Exact accuracies and Jaccard index scores of NS-VQA with BiLSTM and GPT-4 as question parsers, CLIP and GPT-4 on CLEVR-POC. CLIP-NL and CLIP-ASP take constraints in natural language and ASP, respectively. CLIP (no knowledge) is the performance of CLIP without constraints. Sample Size PA (after pre-training) PA (after REINFORCE) PA (GPT-4) 28 (prompt size) 0.9250 \u2248200 0.0512 0 \u22481000 0.4487 0.0366 \u22482000 0.5043 0 Table 3: Drop of program accuracies (PA) after REINFORCE and the performance of GPT-4 provided with just 28 (question, ASP program) pairs as prompt. ASP correspond to instances of CLIP where the constraints are given in natural language and ASP respectively. It should be noted that CLIP-NL performs better than CLIP-ASP, suggesting that representing symbolic knowledge in natural language may be ideal while incorporating knowledge into neural frameworks for QA. The performance of CLIP on CLEVR-POC when no external knowledge is provided is shown in the column CLIP (no knowledge). 
Although without the external knowledge CLIP\u2019s performance drops, there is not much of a difference indicating that we need to consider better techniques for incorporating such symbolic constraints into neural frameworks. This points us toward existing neuro-symbolic frameworks. RQ2 NS-VQA analysis: While neural models failed in symbolic reasoning and incorporating symbolic knowledge into the network, it can be seen that the major challenge faced by neurosymbolic architectures lies not in reasoning but in mapping image or question to a symbolic representation in the absence of ground truth semantic representations. In our experiments, we focus on language perception while assuming 100% accuracy in image perception. Tackling both perceptions simultaneously is even more formidable without access to ground truth representations. Hence, the poor performance of NS-VQA (see column NSVQA (BiLSTM) in Tables 2a and 2b) can be solely attributed to the failure of REINFORCE in learning accurate ASP programs. As mentioned in Section 4.1.2, a BiLSTM is initially pre-trained in a supervised fashion with a few examples. We experimented by varying the number of examples provided for pre-training. Table 3 shows the program accuarcy after pre-training with around 200, 1000 and 2000 pairs of . When these pre-trained models are further trained with REINFORCE, there is a drastic drop in the program accuracy as the focus of the REINFORCE algorithm is on coming up with the correct answer independent of the program\u2019s accuracy. This fall is observed even with the original CLEVR dataset. The chances of deriving the correct answer even with a wrong program by a fluke are higher in the case of CLEVR compared to CLEVR-POC considering the larger solution space of CLEVRPOC (see Section 3.4). REINFORCE clearly fails to learn ASP programs through weak supervision even when it initiates its training from a proficient model. RQ3 LLM Analysis: In the first experiment we used GPT-4 as a question parser. The BiLSTM-based question parser of NS-VQA is replaced with GPT-4 (the results are shown in column NS-VQA(GPT-4) in Tables 2a and 2b). The model is provided with just 28 (question, ASP program) pairs of examples as prompts. GPT-4 with no fine tuning was able to accurately predict the equivalent ASP programs. The stand-alone GPT-4 approach gave less than 50% exact accuracy. The evidence indicates that employing GPT-4 as a question parser to translate the question into an ASP program and subsequently utilizing an ASP reasoning engine leads to better results compared to placing the entire burden of symbolic reasoning on GPT-4. It should also be noted that GPT-4 with no data-specific training performed better than CLIP and NS-VQA (BiLSTM). There is still room for improvement with some fine-tuning. \f5. Discussion We now discuss important challenges that our dataset and work point to. Reasoning and LLM: The experiments showed that the direct application of LLMs is not a good solution for such reasoning-intensive tasks. (Mahowald et al., 2023) also discusses the limitations of LLMs in formal reasoning tasks. Our experiments showed that a more appropriate approach to harnessing LLMs involved relieving them of the task of symbolic reasoning and instead employing them for generating symbolic representations. Progressing even further entails discovering mechanisms for seamlessly incorporating specific knowledge into LLMs and generating responses that are consistent with this knowledge. 
Symbolic knowledge in visual perception network: Although the focus of this paper was on language and reasoning, it may be noted that knowledge in the form of constraints in CLEVRPOC can play a significant role during image perception as it can provide hints on what can or cannot be in the image. This is a form of weak supervision which is also required in the absence of ground truth scene graphs to accelerate the learning process. Developing neuro-symbolic models with a stronger feedback mechanism for visual perception, such as DeepProbLog (Manhaeve et al., 2018), NeurASP (Yang et al., 2020), Semantic-Loss (Xu et al., 2018) and LTN (Serafini and Garcez, 2016)), would help in faster convergence. The aforementioned frameworks, however, cannot still be applied to VQA tasks due to scalability issues. 6." + }, + { + "url": "http://arxiv.org/abs/1910.05113v2", + "title": "Fairness in Clustering with Multiple Sensitive Attributes", + "abstract": "A clustering may be considered as fair on pre-specified sensitive attributes\nif the proportions of sensitive attribute groups in each cluster reflect that\nin the dataset. In this paper, we consider the task of fair clustering for\nscenarios involving multiple multi-valued or numeric sensitive attributes. We\npropose a fair clustering method, \\textit{FairKM} (Fair K-Means), that is\ninspired by the popular K-Means clustering formulation. We outline a\ncomputational notion of fairness which is used along with a cluster coherence\nobjective, to yield the FairKM clustering method. We empirically evaluate our\napproach, wherein we quantify both the quality and fairness of clusters, over\nreal-world datasets. Our experimental evaluation illustrates that the clusters\ngenerated by FairKM fare significantly better on both clustering quality and\nfair representation of sensitive attribute groups compared to the clusters from\na state-of-the-art baseline fair clustering method.", + "authors": "Savitha Sam Abraham, Deepak P, Sowmya S Sundaram", + "published": "2019-10-11", + "updated": "2020-01-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "INTRODUCTION Clustering is the task of grouping a dataset of objects in such a way that objects that are assigned to the same group, called a cluster, are more similar to each other than those in other groups/clusters. Clustering [10] is a well-studied and fundamental task, arguably the most popular task in unsupervised or exploratory data analytics. A pragmatic way of using clustering within analytics pipelines is to consider objects within the same cluster as being indistinguishable. Customers in the same cluster are often sent the same promotional material in a targeted marketing scenario in retail, whereas candidates clustered using their resumes may be assigned the same shortlisting decision in a hiring scenario. Clustering provides a natural way to tackle the infeasibility of doing manual per-object assessment or appreciation, especially in cases where the dataset in question encompasses more than a few hundreds of objects. Central to clustering is the notion of similarity which may need to be defined in a task-oriented manner. As an example, clustering to aid a task on identifying tax defaulters may use a similarity measure that focuses more on the job and income related attributes, whereas that for a health monitoring task may more appropriately focus on a very different set of attributes. 
Usage of clustering algorithms in analytics pipelines for making important decisions open up possibilities of unfairness. Among two high-level streams of fairness constructs, viz., individual fairness [8] and group fairness [12], we focus on the latter. Group fairness considers fairness from the perspective of sensitive attributes such as gender and ethnicity and groups defined on the \u00a9 2020 Copyright held by the owner/author(s). Published in Proceedings of the 23rd International Conference on Extending Database Technology (EDBT), March 30-April 2, 2020, ISBN 978-3-89318-083-7 on OpenProceedings.org. Distribution of this paper is permitted under the terms of the Creative Commons license CC-by-nc-nd 4.0. basis of such sensitive attributes. Consider a clustering algorithm that targets to group records of individuals to clusters; it is possible and likely that certain clusters have highly skewed distributions when profiled against particular sensitive attributes. As an example, clustering a dataset with broad representation across gender groups based on exam scores could lead to clusters that are highly gender-homogeneous due to statistical correlations1; this would happen even if the gender attribute were not explicitly considered within the clustering, since such information could be held implicitly across one or more other attributes. Choosing a cluster with a high gender or ethnic skew for positive (e.g., interview shortlisting) or negative (e.g., higher scrutiny or checks) action entails differentiated impact across gender and ethnic groups. This could also lead to reinforcement of societal stereotypes. For example, consistently choosing individuals from particular ethnic groups for pro-active surveillance could lead to higher reporting of violations from such ethnic groups since enhanced surveillance translates to higher crime visibility, resulting in higher reporting rates. This reporting skew results thus manifests as a data skew which provides opportunities for future invocations of the same analytics to exhibit higher ethnic asymmetry. In modern decision making scenarios within a liberal and democratic society, we need to account for a plurality of sensitive attributes within analytics pipelines to avoid causing (unintended) discrimination; these could include gender, race, religion, relationship status and country of origin in generic decision-making scenarios, and could potentially include attributes such as age and income in more specific ones. There has been much recent interest in the topic of fair clustering [1, 6]. Our Contribution. In this paper, we consider the task of fair clustering in the presence of multiple sensitive attributes. As will be outlined in a later section, this is a direction that has received less attention amidst existing work on fair clustering that has been designed for scenarios involving a single binary sensitive attribute [3, 6, 14, 17], single multi-valued/categorical sensitive attribute [1, 20\u201322], or multiple overlapping groups [4]. We propose a clustering formulation and algorithm to embed group fairness in clustering that incorporates multiple sensitive attributes that may include numeric, binary or multi-valued (i.e., categorical) sensitive ones. Through an empirical evaluation over multiple datasets, we illustrate the empirical effectiveness of our approach in generating clusters with fairer representations of sensitive groups, while preserving cluster meaningfulness. 
2 RELATED WORK Fairness in machine learning has received significant attention over the last few years. Our work contributes to the area of fair methods for unsupervised learning tasks. We now briefly survey the area of fairness in unsupervised learning, with a focus 1https://www.compassprep.com/new-sat-has-disadvantaged-female-testers/ arXiv:1910.05113v2 [cs.LG] 24 Jan 2020 \fon clustering, our task of interest. We interchangeably refer to groups defined on sensitive attributes (e.g., ethnic groups, gender groups etc.) as protected classes for consistency with terminology in some referenced papers. Towards developing a systematic summary, we categorize prior work into three types depending on whether the fairness modelling appears as a (i) pre-processing step, (ii) during the task of clustering, or (iii) as a post-processing step after clustering. These yield the three technique families. 2.1 Space Transformation Approaches The family of fairness pre-processing techniques work by first representing the data points in a \u2018fair\u2019 space followed by applying any existing clustering algorithm upon them. This is the largest family of techniques, and most approaches in this family seek to achieve theoretical guarantees on representational fairness. This family includes one of the first works in this area of fair clustering [6]. The work proposes a fair variant of classical clustering for scenarios involving a single sensitive attribute that can take binary values. Let each object be colored either x or y depending on its value for the binary sensitive attribute. [6] defines fairness in terms of balance of a cluster, which is min{#x/#y, #y/#x}. They go on and outline the concept of (b,r)-fairlet decomposition, where the points in the dataset are grouped into small clusters called fairlets, such that each fairlet would have a balance of b/r, where b < r. The clustering is then performed on these fairlets. The fairness guarantees that are provided by the clustering is based on the balance in the underlying fairlets. Fairlet decomposition turns out to be NP-hard, for which an approximation algorithm is provided. Later, [3] proposed a faster fairlet decomposition algorithm offering significant speedups. The work in [20] extends the fairlet idea to K-means for scenarios with a single multi-valued sensitive attribute. They define fair-coresets which are smaller subsets of the original dataset, such that solving fair clustering over this subset also results in giving an approximate solution for the entire dataset. Another set of fair space transformation approaches build upon methods for dimensionality reduction and representation learning. A recent such work [2] considers the bias in the dataset (that is, representational skew) as a form of noise and uses spectral de-noising techniques to project the original points in the dataset to a new fair projected space. [17] describes a fair version of PCA (Principal Component Analysis) for data with a single binary sensitive attribute. A representation learning method may be defined as fair if the information about the sensitive attributes cannot be inferred from the learnt representations. The method uses convex optimization formulations to embed fairness within PCA. The fairness is specified in terms of failure of the classifiers in predicting the sensitive class of the dimensionality-reduced data points obtained from fair PCA. 
The method is fairer if the data points are less distinguishable with respect to their values of the sensitive attribute in this lower dimension space. Another work that projects the original data points into a fair space is the one described in [21]. This method, which is for cases involving a single multi-valued sensitive attribute, defines a clustering to be fair when there is an equal number of data points of each protected class in each cluster. They define the concept of fairoids (short for fair centroids) which are formed by grouping together all points belonging to the same protected class. The task is then to learn a latent representation of the data points such that the cluster centroids obtained after clustering on this latent representation are equi-distant from every fairoid. 2.2 Fairness Modelling within Optimization Methods in this family incorporate the fairness constraints into the clustering step, most usually within the objective function that the clustering method seeks to optimize for. It may be noted that the method we propose in this paper, FairKM, also belongs to this family. Approaches within this family define a clustering to be fair if the proportional representation of the protected class in a cluster reflects that in the dataset. One of the techniques [14] describes a fair variant of spectral clustering where a linear fairness constraint is incorporated into the original ratio-cut minimization objective of spectral clustering. Another recent technique [22], the method that comes closest to ours in construction, modifies K-means clustering to add a fairness loss term. The fairness loss is computed as the KL-divergence between the probability distribution across the different values for the sensitive attribute in a cluster, and the corresponding distribution for the whole dataset. This method is designed for a single multi-valued sensitive attribute and does not generalize to multiple such sensitive attributes. Being closest to our proposed method in spirit, we use this method as our primary baseline, in the experiments. In contrast to the above, another recent work [5] outlines a different notion of fairness, one that is independent of (and agnostic to) sensitive attributes. They define fairness as proportionality wrt spatial distributions, to mean that any (n/k) points can form their own cluster if there exists another center that is closer to each of these (n/k) points. This proportionality constraint is incorporated into the objective function of k-median clustering and is optimized to find a clustering that satisfies this constraint. 2.3 Cluster Perturbation Techniques In this third family of techniques, vanilla clustering is first applied on the original set of data points, after which the generated clusters are perturbed to improve fairness of the solution. In [4], fairness is defined in terms of a lower and upper bound on the representation of a protected class in a cluster. This method is for cases with multiple binary sensitive attributes, referred to as overlapping groups in the paper. The k-centers generated from vanilla clustering on the data points are used to perform a fair partial assignment between points and the centers. The fair partial assignment is formulated as a linear program with constraints that ensures that the sum of the weights associated with a point\u2019s partial assignments is one, and, the representation of a protected class in a cluster is within the specified upper and lower bounds. 
The partial assignments are then converted to integral assignments by framing it as another linear programming problem. [1] also uses a similar idea, but it just enforces an upper bound, consequently preventing over-representation of specific groups in a cluster. The work described in [13] proposes a simple approximation algorithm for k-center clustering under a fairness constraint, for scenarios with a single multi-valued sensitive attribute. The method targets to generate a fair summary of a large set of data points, such that the summary is a representative subset of the original dataset. For example, if the original dataset has a 70:30 male:female distribution, then a fair summary should also have the same distribution. 2.4 Summary Table 1 summarizes the different approaches in literature and our proposed approach FairKM, in terms of the number and type of sensitive attributes they handle and their definition for fairness. As it may be seen from the table, there has been very limited \fPaper Number Type Fairness Definition [6],[3],[2] Single Binary Preserve proportional representation of protected classes within clusters. [20] Single Multi-valued Preserve proportional representation of protected classes within clusters. [17] Single Binary The accuracy of the classifier predicting the protected class of a data point should be within a specified bound. [21] Single Multi-valued Each cluster should have an equal number of data points from each protected class. [4] Multiple Binary The proportional representation of a protected class in a cluster should be within the specified lower and upper bounds. [1] Single Multi-valued The proportional representation of a protected class in a cluster should not go beyond a specified upper bound. [13] Single Multi-valued The clustering should produce pre-specified number of cluster centers belonging to each specific protected class. [14], [22] Single Multi-valued Preserve proportional representation of protected classes within clusters. [5] There are no set of (n/k) points such that there exists another center that is closer to each of these (n/k) points. [18] Single Multi-valued Each cluster should have atleast a pre-specified number of points of a protected class. FairKM Multiple Multi-valued/ Preserve proportional representation of protected classes within clusters. Numeric as its representation in the whole dataset. Table 1: Fair Unsupervised ML Methods indicating the Number and Type of Sensitive Attributes they are designed for. exploration into methods that admit multiple multi-valued (aka categorical or multi-state) sensitive attributes, the space that FairKM falls in. While multiple multi-valued attributes can be treated as together forming a giant multi-valued attribute taking values that are combinations of the component attributes, this results in a large number of very fine-grained groupings. These make it infeasible to both (i) impose fairness constraints over, and (ii) ensure parity in treatment of different sensitive attributes independent of the differences in the number of values they take. Considering the fact that real-life scenarios routinely present with multiple sensitive attributes, FairKM, we believe addresses an important line of inquiry in the backdrop of the literature. We will empirically evaluate FairKM against [22], the latter coming from the same technique family and having similar construction. 3 PROBLEM DEFINITION Let X = {. . . ,X, . . .} be a dataset of records defined over two sets of attributes N and S. 
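For that comparison, our reading of the fairness loss of [22] sketched above, namely a KL divergence between the within-cluster distribution of a single sensitive attribute and its dataset-level distribution, is illustrated below. The smoothing constant and data structures are our own choices and not taken from [22].

```python
import numpy as np

def kl_fairness_loss(cluster_assignments, sensitive_values, eps=1e-12):
    """Sum over clusters of KL(P_cluster || P_dataset) for one sensitive attribute.
    cluster_assignments[i] is the cluster id of record i; sensitive_values[i] its attribute value."""
    values = sorted(set(sensitive_values))

    def distribution(members):
        counts = np.array([sum(sensitive_values[i] == v for i in members) for v in values],
                          dtype=float)
        return (counts + eps) / (counts.sum() + eps * len(values))

    dataset_p = distribution(range(len(sensitive_values)))
    loss = 0.0
    for cluster in set(cluster_assignments):
        members = [i for i, c in enumerate(cluster_assignments) if c == cluster]
        cluster_p = distribution(members)
        loss += float(np.sum(cluster_p * np.log(cluster_p / dataset_p)))
    return loss
```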
N stands for the set of attributes that are relevant to the task of interest, and thus may be regarded as non-sensitive. As examples, this may comprise experience and skills in the case of screening applicants for a job application, and attributes from medical wearables' data to inform decision making for pro-active screening. S stands for the set of sensitive attributes, which would typically include attributes such as those identifying gender, race, religion and relationship status in a citizen database, and any other sensitive attribute over which fairness is to be ensured. In other scenarios such as NLP for education, representational fairness may be sought over attributes such as types of problems in a word problem database; this forms one of the scenarios in our empirical evaluation. The (vanilla) clustering objective would be to group X into clusters such that it maximizes intra-cluster similarity and minimizes inter-cluster similarity, with similarity gauged using the task-relevant attributes in N. Within a fair clustering, we would additionally like the output clusters to be fair on attributes in S. A natural way to operationalize fairness within a clustering that covers all objects in X would be to ensure that the distribution of groups defined on sensitive attributes within each cluster approximates the distribution across the dataset; this correlates with the well-explored notion of statistical parity [8] in fair supervised learning. For example, suppose the sex ratio in X is 1:1; we would ideally like each cluster in the clustering output to report a sex ratio of 1:1, or very close to it. In short, we would like a fair clustering to produce clusters, each of which both:
• is coherent when measured on the attributes in N, and
• approximates the dataset distribution when measured on the attributes in S.
It may be noted that simply hiding the S attributes from the clustering algorithm does not suffice. A gender-blind clustering algorithm may still produce clusters that are highly gender-homogeneous, since some attributes in N could implicitly encode gender information. Indeed, we would like a fair clustering to surpass S-blind clustering by significant margins on fairness.
4 FAIRKM: OUR METHOD
We now describe our proposed technique for fair clustering, codenamed FairKM, short for Fair K-Means, indicating that it draws inspiration from the classical K-Means clustering algorithm [9, 16]. FairKM incorporates a novel fairness loss term that nudges the clustering towards fairness on attributes in S. The FairKM objective function is as follows:
$O = \underbrace{\sum_{C \in \mathcal{C}} \sum_{X \in C} dist_N(X, C)}_{\text{K-Means term over attributes in } N} \; + \; \lambda \, \underbrace{deviation_S(\mathcal{C}, \mathcal{X})}_{\text{Fairness term over attributes in } S} \quad (1)$
As indicated, the objective function comprises two components; the first is the usual K-Means loss for the clustering C, with dist_N(X, C) computing the distance between X and the prototype of cluster C, the distance computed only over attributes in N. The second is a fairness loss term we introduce, which is computed over attributes in S. λ is a parameter that may be used to balance the relative strength of the two terms. As in K-Means, this loss is computed over a given clustering; the task is thus to identify a clustering that minimizes O as much as possible. We now describe the details of the second term, and the intuitions behind its construction.
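Before doing so, the overall shape of Eq. (1) can be illustrated with a short sketch (our own, not the authors' implementation). It assumes squared Euclidean distance for dist_N, and takes the fairness term as a pre-computed value; a sketch of that term follows once Eq. 7 has been derived.

```python
import numpy as np

def fairkm_objective(X_N, assign, centroids, fairness_term, lam):
    """Eq. (1): K-Means loss over the task-relevant attributes N, plus a
    lambda-weighted fairness loss computed over the sensitive attributes S."""
    kmeans_term = sum(np.sum((X_N[assign == c] - centroids[c]) ** 2)
                      for c in range(len(centroids)))
    return kmeans_term + lam * fairness_term
```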
4.1 The Fairness Term in FairKM While the K-Means term in the FairKM objective tries to ensure that the output clusters are coherent in the N attributes, the fairness term performs the complementary function of ensuring that the clusters manifest fair distributions of groups defined by sensitive attributes in S. We outline the motivation and construction of the fairness term herein. Attribute-Specific Deviation for a Cluster: Consider a single sensitive attribute S (e.g., gender) among the set of sensitive attributes S. For each data object X, S may take on one value from a set of permissible values. Lets be one such value (e.g., female, for the choice of S as gender). For an ideally fair cluster C, one would expect that the fractional representation of s in C the fraction of objects inC that take the values for S to be close to the fractional representation of s in X. With our intent of generating clusters that are as fair as possible, we seek to generate clusterings such that the deviation between the fractional representations of s in C and X are minimized for each cluster C. For a given cluster C and a choice of value s for S, we model the deviation as simply the square of the absolute differences between the fractional representations in C and X: DS C(s) = \u0012 |{X |X \u2208C \u2227X.S = s}| |C| \u2212|{X |X \u2208X \u2227X.S = s}| |X| \u00132 (2) The deviation, when aggregated over all values of S, yields: DS C = (\u00cd s \u2208V alues(S) DS C(s) |C| , 0 0 |C| = 0 (3) The above aggregation accounts for the fact that DS C(s) is undefined when C is an empty cluster. Domain Cardinality Normalization: Different sensitive attributes may have different numbers of permissible values (or domain cardinalities). For example, race and gender attributes typically take on much fewer values than country of origin. Those attributes with larger domains are likely to yield larger DS C scores because, (i) the deviations are harder to control within (small) clusters given the higher likely scatter, and (ii) there are larger numbers of DS C(s) terms that add up to DS C. In order to ensure that attributes with larger domains do not dominate the fairness term, we normalize the deviation by the number of different values taken by an attribute, yielding NDS C, a normalized attribute-specific deviation: NDS C = DS C |Values(S)| (4) This is then summed up over all attributes in S to yield a single term for each cluster: NDC = \u00d5 S \u2208S NDS C (5) Cluster Weighting: Observe that NDC deviation loss would tend towards 0.0 for very large clusters, since they are obviously likely to reflect dataset-level distributions better; further, an empty cluster would also have NDS C = 0 by definition. Considering the above, a clustering loss term modelled as a simple sum over its clusters, \u0000 \u00cd C \u2208C NDC \u0001 or a cardinality weighted sum, \u0000 \u00cd C \u2208C |C| \u00d7 NDC \u0001, can both be driven towards 0.0 by keeping a lot of clusters empty, and distributing the dataset objects across very few clusters; the boundary case would be a single non-empty cluster. Indeed, this propensity towards the boundary condition is kept in check by the K-Means term; however, we would like our fairness term to drive the search towards more reasonable fair clustering configurations in lieu of simply reflecting a propensity towards highly skewed clustering configurations. Towards achieving this, we weight each cluster\u2019s deviation by the square of it\u2019s fractional cardinality of the dataset. 
This leads to an overall loss term as follows: \u00d5 C \u2208C \u0012 |C| |X| \u00132 \u00d7 NDC (6) The squared term in the weighting enlarges the NDC terms of larger clusters much more than smaller ones, making it unprofitable to create large clusters; this compensates for the propensity towards skewed clusters as embodied in the loss construction. Overall Loss: The overall fairness loss term is thus: deviationS(C, X) = \u00d5 C \u2208C \u0012 |C| |X| \u00132 \u00d7 \u00d5 S \u2208S \u00cd s \u2208V alues(S) \u0000FrS C(s) \u2212FrS X(s)\u00012 |Values(S)| (7) where FrS C(s) and FrS X(s) are shorthands for the fractional representation of S = s objects in C and X respectively. 4.2 The Optimization Approach Having defined the objective function, we now outline the optimization approach. It is easy to observe that there are three sets of parameters, the clustering assignments for each data object in X, the cluster prototypes that are used in the first term of the objective function, and the fractional representations, i.e., FrS C(s)s, used in the fairness term. Unlike K-Means, given the more complex construction, it is harder to form a closed-form solution for the cluster assignments. Thus, from a given estimate of all three sets of parameters, we step over each data object X \u2208X in round-robin fashion, updating its cluster assignment, and making consequent changes in cluster prototypes and fractional representations. One set of round-robin updates forms an iteration, with multiple such iterations performed until convergence or until a maximum threshold of iterations is reached. 4.2.1 Cluster Assignment Updates. At \u03bb = 0, FairKM defaults to K-Means where the cluster assignments are determined only by proximity to the cluster prototype (over attributes in N). At higher values of \u03bb, FairKM cluster assignments are increasingly swayed by considerations of representational fairness of S attributes within clusters. It may be noted that the cluster assignments are used in both the terms of the FairKM objective, in different ways. This makes a closed form estimation of cluster assignments harder to arrive at. This leads us to a round-robin approach of determining cluster assignments. When each X is considered, the cluster prototypes as well as the current cluster assignments of all other objects, i.e. X \u2212{X }, are kept unchanged. The cluster assignment for X is then estimated as: Cluster(X) = arg min C OC+(X \u2208C) (8) \fFor the candidate object X, we evaluate the value of the objective function by changing X\u2019s cluster membership from the present one to each cluster, C + (X \u2208C) indicating a corresponding change in the clustering configuration retaining all other objects\u2019 present cluster assignments. X is then assigned to the cluster for which the minimum value of O is achieved. While this may look as a simple step, implementing it naively is computationally expensive. However, easy optimizations are possible when one observes the dynamics of the change and how it operates across the two terms. We now outline a simple way of affecting the cluster assignment decision. First, let X\u2019s current cluster assignment be C\u2032; the cluster assignment step can be equivalently written as: Cluster(X) = arg min C \u03b4(O)X \u2208C\u2032\u2192X \u2208C (9) where \u03b4O indicates the change in O when the respective cluster assignment change is carried out. 
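Putting the pieces so far together, the following sketch (our own simplification, assuming categorical sensitive attributes and squared Euclidean distances over N) computes the fairness term of Eq. 7 and performs a naive version of the reassignment in Eq. 8, i.e., re-evaluating the full objective for every candidate move.

```python
import numpy as np

def deviation_term(X_S, assign, k):
    """Fairness term of Eq. 7, for categorical sensitive attributes.

    X_S    : (n, m) array, one column per sensitive attribute in S (categorical codes)
    assign : (n,) array of cluster indices in {0, ..., k-1}
    """
    n, m = X_S.shape
    total = 0.0
    for c in range(k):
        members = X_S[assign == c]
        if len(members) == 0:               # empty clusters contribute nothing
            continue
        cluster_dev = 0.0
        for j in range(m):
            values = np.unique(X_S[:, j])
            # fractional representations in the cluster and in the whole dataset
            fr_C = np.array([(members[:, j] == v).mean() for v in values])
            fr_X = np.array([(X_S[:, j] == v).mean() for v in values])
            # normalised attribute-specific deviation (domain-cardinality normalisation)
            cluster_dev += np.sum((fr_C - fr_X) ** 2) / len(values)
        # squared fractional-cardinality weighting of the cluster
        total += (len(members) / n) ** 2 * cluster_dev
    return total

def naive_reassign(X_N, X_S, assign, k, lam):
    """One round-robin pass of Eq. 8, recomputing the objective for every
    candidate move (illustrative only; the text derives cheaper incremental updates)."""
    for i in range(len(assign)):
        best_c, best_obj = assign[i], np.inf
        for c in range(k):
            trial = assign.copy()
            trial[i] = c
            centroids = np.array([X_N[trial == j].mean(axis=0) if np.any(trial == j)
                                  else np.zeros(X_N.shape[1]) for j in range(k)])
            kmeans_term = sum(np.sum((X_N[trial == j] - centroids[j]) ** 2) for j in range(k))
            obj = kmeans_term + lam * deviation_term(X_S, trial, k)
            if obj < best_obj:
                best_obj, best_c = obj, c
        assign[i] = best_c
    return assign
```

Recomputing the objective from scratch in this manner is exactly the expensive step that the incremental δ(O) computations derived next are designed to avoid.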
This can be expanded into the changes in the two terms in the objective function as follows: \u03b4(O)X \u2208C\u2032\u2192X \u2208C = \u03b4(K-Means term)X \u2208C\u2032\u2192X \u2208C+\u03bb\u00d7\u03b4(deviation term)X \u2208C\u2032\u2192X \u2208C (10) We now detail the changes in the respective terms separately. Change in K-Means Term: We now outline the change in the K-Means term by moving X fromC\u2032 toC. As may be obvious, this depends only on attributes in N. We model the cluster prototypes as simply the average of the objects within the cluster. The change in the K-Means term is the sum of (i) the change in the K-Means term corresponding to C\u2032 brought about by the exclusion of X from it, and (ii) the change in the K-Means term corresponding to C brought about by the inclusion of X within it. We discuss them below. Consider a single numeric attribute N \u2208N, for simplicity. Through excluding X fromC\u2032, the N attribute value of the cluster prototype of C\u2032 undergoes the following change: C\u2032.N \u2192 \u0014\u0012 C\u2032.N \u2212X.N |C\u2032| \u0013 \u00d7 |C\u2032| |C\u2032 \u22121| \u0015 (11) where C\u2032 is overloaded to refer to the cluster and the cluster prototype (to avoid clutter), all values referring to those prior to exclusion of X. The term after the \u2192stands for the N attribute value for the new cluster prototype. As indicated, it is computed by the removal of the contribution from X from the cluster prototype, followed by re-normalization, now that C\u2032 has one fewer object within it. The change in the K-Means term for N corresponding to C\u2032 is then as follows: \u03b4Xout KM(C\u2032, N) = \u0012 \u00d5 X \u2032\u2208C\u2032,X \u2032,X (X \u2032.N \u2212New(C\u2032.N))2 \u0013 \u2212 \u0014\u0012 \u00d5 X \u2032\u2208C\u2032,X \u2032,X (X \u2032.N \u2212C\u2032.N)2 \u0013 + (X.N \u2212C\u2032.N)2 \u0015 (12) where New(C\u2032.N) is the new estimate of C\u2032.N as outlined in Eq. 11. The first term corresponds to the K-Means loss in the new configuration (after exclusion of X), whereas the sum of the second and third terms correspond to that prior to exclusion of X. Analogous to the above, the new centroid computation for C and the change in the K-Means terms are outlined below: C.N \u2192 \u0014\u0012 C.N \u00d7 |C| |C + 1| \u0013 + X.N C + 1 \u0015 (13) \u03b4XinKM(C, N) = \u0014\u0012 \u00d5 X \u2032\u2208C,X \u2032,X (X \u2032.N \u2212New(C.N))2 \u0013 + (X.N \u2212New(C.N))2 \u0015 \u2212 \u0012 \u00d5 X \u2032\u2208C,X \u2032,X (X \u2032.N \u2212C.N)2 \u0013 (14) It may be noticed that the computation of the changes above only involve X and other objects in C and C\u2032. In particular, the other clusters and their objects do not come into play. So far, we have computed the changes for only one attribute N. The overall change in the K-Means term is simply the sum of these changes across all attributes in N. \u03b4(K-Means term)X \u2208C\u2032\u2192X \u2208C = \u00d5 N \u2208N \u0012 \u03b4Xout KM(C\u2032, N) + \u03b4XinKM(C, N) \u0013 (15) Change in Fairness Term: We now outline the construction of the change in the fairness term. As earlier, we start by considering a single clusterC\u2217, a single attribute S, and a single value s within it. The fairness term from Eq. 
7 can be written as follows: dev(C\u2217,S = s) = C\u22172 \u00d7 \u0012\u0012 C\u2217 s C\u2217 \u00132 + \u0012 Xs X \u00132 \u22122C\u2217 s Xs C\u2217X \u0013 X2 \u00d7 |Values(S)| (16) where each set (C\u2217and X) is overloaded to represent both itself and its cardinality (to avoid notation clutter), and their suffixed versions (C\u2217 s and Xs) are used to refer to their subsets containing their objects which take the value S = s. The above equation follows from the observation that FrS C\u2217(s) = C\u2217 S C\u2217and analogously for X. When an object changes clusters from C\u2032 to C, there is a change in the terms associated with both clusters, as in the previous case. The change in the origin cluster C\u2032 works out to be the follows: \u03b4Xoutdev(C\u2032,S = s) = 1 X2 \u00d7 |Values(S)| \u00d7 \u0014\u0012 Xs X \u00132 (1 \u22122C\u2032)+ I(X.S = s)(1 \u22122C\u2032 s) \u22122 \u0012 Xs X \u0013 \u0012 I(X.S = s)(1 \u2212C\u2032) \u2212C\u2032 s \u0013\u0015 (17) where I(.) is an indicator function, and C\u2032 and C\u2032 s denote the cardinalities before X is taken out of C\u2032. We omit the derivation for space constraints. Intuitively, to nudge clusters towards fairness, we would like to incentivize removal of objects with S = s from C\u2032 when C\u2032 is overpopulated with such objects (i.e., C\u2032 s is high). This is evident in the \u2212(C\u2032 s \u00d7 I(X.S = s)) component; when C\u2032 s is high, removal of an object with S = s entails a bigger reduction in the objective. The analogous change in the target cluster C, is as follows: \u03b4Xindev(C,S = s) = 1 X2 \u00d7 |Values(S)| \u00d7 \u0014\u0012 Xs X \u00132 (1 + 2C)+ I(X.S = s)(1 + 2Cs) \u22122 \u0012 Xs X \u0013 \u0012 I(X.S = s)(1 + C) + Cs \u0013\u0015 (18) \fwhere C and Cs denote the cardinalities before X is inserted intoC. Given that we are inserting X intoC, the fairness intuition suggests that we should disincentivize addition of objects with s when C already has too many of such objects. This is reflected in the (Cs \u00d7 I(X.S = s)) term; notice that this is exactly the same term as in the earlier case, but with a different sign. Thus, the overall fairness term change is as follows: \u03b4(deviation term)X \u2208C\u2032\u2192X \u2208C = \u00d5 S \u2208S \u00d5 s \u2208V alues(S) \u0012 \u03b4Xoutdev(C\u2032,S = s) + \u03b4Xindev(C,S = s) \u0013 (19) This completes all the steps required for Eq. 9. Based on the change in the cluster assignment, the cluster prototypes and fractional representations are to be updated. 4.2.2 Cluster Prototype Updates. Once a new cluster has been finalized for X, the origin and target cluster prototypes are updated according to Eq. 12 and Eq. 14 respectively. 4.2.3 Fractional Representation Updates. The FrS C\u2032(s)s and FrS C(s)s need to be updated to reflect the change in the cluster assignment of X. These are straightforward and given as follows: \u2200S\u2200s \u2208Values(S), FrS C\u2032(s) = ( C\u2032 s\u22121 C\u2032\u22121 if X.S = s C\u2032 s C\u2032\u22121 if X.S , s (20) \u2200S\u2200s \u2208Values(S), FrS C(s) = ( Cs+1 C+1 if X.S = s Cs C\u2032+1 if X.S , s (21) where the C, C\u2032, Cs and C\u2032 s values above are cardinalities of the respective sets prior to the update to X\u2019s cluster assignment. Alg. 1 FairKM Input. Dataset X, attribute sets S and N, number of clusters k Hyper-parameters: Fairness Weighting \u03bb Output. Clustering C 1. Initialize k clusters randomly 2. Set cluster prototypes as Cluster Centroids 3. 
while(not yet conver\u0434ed and max. iterations not reached) 4. \u2200X \u2208X, 5. Set Cluster(X) using Eq. 9 (and Eq. 10 through Eq. 19) 6. Update cluster prototypes as outlined in Sec 4.2.2 7. Re-estimate the FrS C(s) using Eq. 20 and Eq. 21 8. Return the current clustering assignments as C 4.3 FairKM Algorithm Having outlined the various steps, the FairKM algorithm can now be summarized in Algorithm 1. The method starts with a random initialization of clusterings (Step 1) and proceeds iteratively. Within each iteration, each object is considered in round-robin fashion, executing three steps in sequence: (i) updating the cluster assignment of X (Step 5), (ii) updating the cluster prototypes to reflect the change in cluster assignment of X (Step 6), and (iii) updating the fractional representations correspondingly (Step 7). The significant difference in construction from K-Means is due to the inter-dependency in cluster assignments; the cluster assignment for X depends on the current cluster assignments for all other objects X \u2212{X }, due to the construction of the FairKM objective as reflected in the update steps. The updates proceed as long as the clustering assignments have not converged or a pre-specified maximum number of iterations have not reached. 4.3.1 Complexity: The time complexity of FairKM is dominated by the cluster assignment updates. Within each iteration, for each X (|X| of them) and each cluster it could be re-assigned to (k of them), the deviation needs to be computed for both the (i) K-Means term, and the (ii) fairness term. First, considering the K-Means term, it may be noted that each other object in X would come into play once, either as a member of X\u2019s current cluster (in Eq. 12) or as a member of a potential cluster to which X may be assigned (in Eq. 14). This yields an overall complexity of each K-Means deviation computation being in O(|X||N|). Second, considering the fairness deviation computation, it may be seen as a simple computation (Eq. 17 and 18) that can be completed in constant time. This computation needs to be performed for each attribute in S and each value of the attribute (consider m as the maximum number of values across attributes in S), yielding a total complexity of O(|S|m) for each fairness update computation. With the updates needing to be computed for each new candidate cluster, the overall complexity of Step 5 would be O(|X||N|k + |S|mk). Step 6 is in O(|X||N|) whereas Step 7 is simply in O(|S|m). With the above steps having to be performed for each X and for each iteration, the overall FairKM complexity works out to be in O(|X|2|N|kl + |X||S|mkl) where l is the number of iterations. While the quadratic dependency on the dataset size makes FairKM much slower than simple K-Means (which is linear on dataset size), FairKM compares very favorably against other fair clustering methods (e.g., exact fairlet decomposition [6] is NP-hard, and even the proposed approximation is super-quadratic) which are computationally intensive. 4.4 FairKM Extensions We outline two extensions to the basic FairKM outlined earlier which was intended towards handling numeric non-sensitive attributes and multi-valued sensitive attributes. 4.4.1 Extension to Numeric Sensitive Attributes. FairKM is easily adaptable to numeric sensitive attributes (e.g., age for cases where that is appropriate). If all attributes in S are numeric, the fairness loss term in Eq. 
7 would be written out as: deviationS(C, X) = \u00d5 C \u2208C \u0012 |C| |X| \u00132 \u00d7 \u00d5 S \u2208S (C.S \u2212X.S)2 (22) where C.S and X.S indicate the average value of the numeric attribute S across objects in C and X respectively. When there are a mix of multi-valued and numeric attributes, the inner term would take the form of Eq. 7 and Eq. 22 for multi-valued and numeric attributes respectively. These entail corresponding changes to the update equations which we do not describe here for brevity. 4.4.2 Extension to allow Sensitive Attribute Weighting. In certain scenarios, some sensitive attributes may need to be considered more important than others. This may be due to historical reasons based on a legacy of documented high discrimination on certain attributes, or due to visibility reasons where discrimination on certain attributes (e.g., gender, race and sexual orientation) being more visible than others (e.g., country of origin). The FairKM framework could easily be extended to allow for differential attribute-specific weighting by changing the deviation term to be as follows: \fdeviationS(C, X) = \u00d5 C \u2208C \u0012 |C| |X| \u00132 \u00d7 \u00d5 S \u2208S wS \u00d7 \u00cd s \u2208V alues(S) \u0000FrS C(s) \u2212FrS X(s)\u00012 |Values(S)| (23) Attributes that are more important for fairness considerations can then be assigned a higher weight, i.e. wS, which would lead to their loss being amplified, thus incentivizing FairKM to focus more on them for fairness, consequently leading to a higher representational fairness over them, within the clusters in the output. The wS terms would then also affect the update equations. 5 EXPERIMENTAL STUDY We now detail our experimental study to gauge and quantify the effectiveness of FairKM in delivering good quality and fair clusterings against state-of-the-art baselines. We first outline the datasets in our experimental setup, followed by a description of the evaluation measures and baselines. This is then followed by our results and an analysis of the results. 5.1 Datasets We use two real-world datasets in our empirical study. The datasets are chosen to cover very different domains, attributes and dataset sizes, to draw generalizable insights from the study. First, we use the popular Adult dataset from UCI repository [7]; this dataset is sometimes referenced as the Census Income dataset and contains information from the 1994 US Census. The dataset has 32561 instances, each instance represented using 13 attributes. Among the 13 attributes, 5 are chosen to form the set of sensitive attributes, S. These are {marital status, relationship status, race, \u0434ender, native country}. The number of values taken by each of the sensitive attributes are shown in Table 3. The set of non-sensitive attributes, N, pertain to age, work class (2 attributes), education (2 attributes), occupation, fiscal information (2 attributes) and number of working hours. The dataset has been widely used for predicting income as belonging to one of > 50k$ or <= 50k$. We first undersample the dataset to ensure parity across this income class attribute that we do not use in the clustering process. The total number of instances after undersampling is 15682. Second, we use a dataset2 of 161 word problems from the domain of kinematics. Kinematics is the study of motion without considering the cause of motion. The problems in this dataset is categorized into various types as indicated in Table 2. The complexity of a word problem typically depends on the type. 
For example, Type 1 problems are easier to solve (in terms of the equations required) compared to Type 5 problems. Table 4 shows the number of problems of each of the above types in the dataset. Given such a dataset of word problems from kinematics domain, we are interested in the task of clustering the word problems such that the proportional representation of problems of a particular type in a cluster reflects its representation in the entire dataset. In the application scenario of automatic construction of multiple questionnaires (one from each cluster) from a question bank, the fair clustering task corresponds to ensuring that each questionnaire contains a reasonable mix of problem types. This ensures that there is minimal asymmetry between the different questionnaires generated by a clustering, in terms of overall hardness. For the fair clustering formulation, thus, the problem types 2https://github.com/savithaabraham/Datasets form the set of 5 sensitive binary attributes, S. The lexical representation of each word problem, as a 100 dimensional vector using Doc2Vec models [15], forms the set of numeric attributes in N. Given our fairness consideration, we consider achieving a fair proportion of word problem types within each cluster that reflects their proportion across the dataset. It may be noted that the Adult and Kinematics datasets come from different domains (Census and Word Problems/NLP respectively), have different sizes of non-sensitive attribute sets (8 and 100 attributes in N respectively), different kinds of sensitive attribute sets (multi-valued and binary respectively) and have widely varying sizes (15k and 161 respectively). An empirical evaluation over such widely varying datasets, we expect, would inspire confidence in the generalizability of empirical results. 5.2 Evaluation Having defined the task of fair clustering in Section 3, it follows that a fair clustering algorithm would be expected to perform well on two sets of evaluation metrics, those that relate to clustering quality over N and those that relate to fairness over S. We now outline such evaluation measures below, in separate subsections. 5.2.1 Clustering Quality. These measure how well the clustering fares in generating clusters that are coherent on attributes in N, and do not depend on attributes in S. These could include: \u2022 Silhouette Score (SH): Silhouette [19] measures the separatedness of clusters, and quantifies a clustering with a score in [\u22121, +1], higher values indicating well-separated clusters. \u2022 Clustering Objective (CO): Clustering objective functions such as those employed by K-Means [16] measure how much observations deviate from the centroids of the clusters they are assigned to, where lower values indicate coherent clusters. In particular, the K-Means objective function is: \u00d5 C \u2208C \u00d5 X \u2208C distN(X,C) (24) where C stands for both a cluster in the clustering C as well as the prototype object for the cluster, and distN(., .) is the distance measure computed over attributes in N. \u2022 Deviation from S-blind Clusterings: S-blind clusterings may be thought of achieving the best possible clusters for the task when no fairness considerations are imposed. Thus, among two clusterings of similar fairness, that with lower deviation from S-blind clusterings may be considered desirable. 
A fair clustering can be compared with a S-blind clustering using the following two measures: \u2013 Centroid-based Deviation (DevC): Consider each clustering to be represented as a set of cluster centroids, one for each cluster within the clustering. The sum of pair-wise dotproducts between centroid pairs, each pair constructed using one centroid from the fair clustering and one from the Sblind clustering, would be a measure of deviation between the clusterings. Such measures have been used in generating disparate clusterings [11]. \u2013 Object pair-wise Deviation (DevO): Consider each pair of objects from X, and one clustering (either of S-blind and fair); the objects may belong to either the same cluster or to different clusters. The fraction of object pairs from X where the same/different verdicts from the two clusterings disagree provide an intuitive measure of deviation between clusterings. \fType Description 1:Horizontal Motion The object involved is in a horizontal straight line motion. 2:Vertical motion with an initial velocity The object is thrown straight up or down with a velocity. 3:Free fall The object is in a free fall. 4:Horizontally projected The object is projected horizontally from a height. 5:Two-dimensional The body is projected with a velocity at an angle to the horizontal. Table 2: Kinematics Word Problem Types Attribute No. of values Marital status 7 Relationship status 6 Race 5 Gender 2 Native country 41 Table 3: Adult Dataset: Number of possible values for each sensitive attribute Type Count 1 Horizontal motion 60 2 Vertical motion with an initial velocity 36 3 Free fall 15 4 Horizontally projected 31 5 Two-dimensional 19 Table 4: Kinematics Dataset: #Problems of each Type 5.2.2 Fairness. These measure the fairness of the clustering output from the (fair) clustering algorithm. Analogous to clustering quality measures that depend only on N, the fairness measures we outline below depend only on S and are independent of N. As outlined earlier, we quantify unfairness as the extent of deviation between representations of groups defined using attributes in S in the dataset and each cluster in the clustering output. Consider a multi-valued attribute S \u2208S, which can take on t values. The normalized distribution of presence of each of the t values in X yields a t-length probability distribution vector XS. A similar probability distribution can then be computed for each cluster C in the clustering C, denoted CS. Different ways of measuring the cluster-specific deviations {. . . ,dev(CS, XS), . . .} and aggregating them to a single number yield different quantifications of fairness, as below: \u2022 Average Euclidean (AE): This measures the average of clusterlevel deviations, deviations quantified using euclidean distance between representation vectors (i.e., XS and CSs). Since clusters may not always be of uniform sizes, we use a clustercardinality weighted average. AES = \u00cd C \u2208C |C| \u00d7 ED(CS, XS) \u00cd C \u2208C |C| (25) where ED(., .) denotes the euclidean distance. \u2022 Average Wasserstein (AW): In this measure, the deviation is computed using Wasserstein distance in lieu of Euclidean, as used in [21], with other aspects remaining the same as above. \u2022 Max Euclidean (ME): Often, just optimizing for average fairness across clusters is not enough since there could be a very skewed (small) cluster, whose effect may be obscured by other clusters. It is often the case that one or few clusters get picked from a clustering to be actioned upon. 
Thus, the maximum skew is of interest as an indicative upper bound on the unfairness the clustering could cause if any one of its clusters is chosen for further action. \u2022 Max Wasserstein (MW): This uses Wasserstein instead of Euclidean, using the same formulation as Max Euclidean. When there are multiple attributes in S, as is often the case, the average of the above measures across attributes in S provides aggregate quantifications. As may be evident, the above constructions work only for categorical attributes; however, a similar set of measures can be readily devised for numeric attributes in S. With our datasets containing only categorical attributes among S, we do not outline the corresponding metrics for numeric attributes, though they follow naturally. We are unable to apply some popular fairness evaluation metrics such as balance [6] due to them being devised for binary attributes. 5.3 Baselines We compare our approach against two baselines. The first is that of S-blind K-Means clustering, that performs K-Means clustering on data using the attributes in N alone. This baseline is code-named K-Means (N). K-Means (N) will produce the most coherent clusters on N as its objective function just focuses on maximizing intra-cluster similarity and minimizing inter-cluster similarity over N, unlike FairKM that has an additional fairness constraint which may result in compromising the coherence goal. Comparing the two enables us to evaluate the extent to which cluster coherence is traded off by FairKM in generating fairer clusters. The second baseline is the approach described in [22] which is a fair version of K-Means clustering for scenarios involving a single multi-valued sensitive attribute. We will refer to this baseline as ZGYA from here, based on the names of the authors. Since it is designed for a single multi-valued sensitive attribute and cannot handle multiple sensitive attributes within its formulation, we invoke ZGYA multiple times, separately for each attribute in S. Each invocation is code-named ZGYA(S) where S is the sensitive attribute used in the invocation. We also report results for similar runs of FairKM, where we consider just one of the attributes in S as sensitive at a time. The comparative evaluation between FairKM and ZGYA enables studying the effectiveness of FairKM formulation over that of ZGYA in their relative effectiveness of trading off coherence for fairness. 5.4 Setting \u03bb in FairKM From Eq. 1, it may be seen that the K-Means term has a contribution from each object in X, whereas the fairness term (Eq. 7) aggregates cluster level contributions. This brings a disparity in that the former has |X|/k times as many terms as the latter. Further, it may be noted that the fairness term aggregates deviations \fbetween cluster level fractional representations and dataset level fractional representation. The fractional representation being an average across objects in the cluster, each object can only influence 1/|C| of it, where |C| is the cluster cardinality. On an average, across clusters, |C| = |X|/k. Thus, the fairness term has |X|/k fewer terms, each of whom can be influenced by an object to a fraction of 1/(|X|/k). To ensure that the terms are of reasonably similar sizes, so the clustering quality and fairness concerns be given similar weighting, the above observations suggest that \u03bb be set to \u0000 |X| k \u00012. From our empirical observations, we have seen that the FairKM behavior varies smoothly around this setting. 
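As a quick illustration of this heuristic (a back-of-the-envelope check using the dataset sizes reported in Section 5.1, not code from the paper):

```python
# Adult (k = 5 and 15) and Kinematics (k = 5)
for n, k in [(15682, 5), (15682, 15), (161, 5)]:
    print(n, k, (n / k) ** 2)   # roughly 1e7, 1e6 and 1e3 respectively
```

These are of the same order of magnitude as the settings used next.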
Based on the above heuristic, we set \u03bb to 106 for the Adult dataset, and 103 for the Kinematics dataset, given their respective sizes. We will empirically analyze sensitivity to \u03bb in Section 5.7. We set max iterations to 30 in FairKM instantiations. 5.5 Clustering Quality and Fairness 5.5.1 Evaluation Setup. In each of our datasets, there are five sensitive (i.e., S) attributes. FairKM can be instantiated with all of them at once, and we do so with appropriate values of \u03bb (106 or 103, as mentioned in Section 5.4). We perform 100 such instantiations, each with a different random seed, and measure the clustering quality and fairness evaluation measures (fairness measures computed separately for each attribute in S as well as the average across all attributes in S) outlined in Section 5.2. We take the mean values across the 100 instantiations to arrive at a single robust value for each evaluation measure for FairKM. An analogous setting is used for our first baseline, the S-blind K-Means (denoted K-Means (N)), as well. Our second baseline, ZGYA, unlike FairKM, needs to be instantiated with one S attribute at a time. Given this setting, we adopt different mechanisms to compare FairKM against ZGYA across clustering quality and fairness evaluation measures. First, for clustering quality, we instantiate ZGYA separately with each attribute in S and compute an average value for each evaluation measure across random initializations as described previously. This yields one value for each evaluation measure for each attribute in S, which we take an average of, and report as the clustering quality of Avg. ZGYA. Second, for fairness, we adopt a synthetic favorable setting for ZGYA to test FairKM against. For each attribute S \u2208S, we consider the fairness metrics (AE, AW, ME, MW) obtained by the instantiation of ZGYA over only that attribute (averaged across random initializations, as earlier). This is compared to the fairness metrics obtained for S by the FairKM instantiation that considers all attributes in S. In other words, for each S \u2208S, we benchmark the single cross-S instantiation of FairKM against separate S-targeted instantiations of ZGYA. We also report the average across these separate comparisons across attributes in S. For K-Means style clustering formulations, the number of clusters k, is an important parameter. We experiment with two values for k, viz., 5 and 15, for the Adult dataset, whereas we use k = 5 for the Kinematics dataset, given its much smaller size. 5.5.2 Clustering Quality. The clustering quality results appear in Table 5 (Adult dataset) and Table 7 (Kinematics dataset), with the direction against each evaluation measure indicating whether lower or higher values are more desirable. For clustering quality metrics that depend only on attributes in N, we use KMeans (N) as a reference point since that is expected to perform well, given that it does not need to heed to S and is not held accountable for fairness. Thus, FairKM is not expected to beat K-Means (N); the lesser the degradation from K-Means (N) on various clustering quality metrics, the better it may be regarded to be. We compare FairKM and Avg. ZGYA across the results tables, highlighting the better performer on each evaluation measure by boldfacing the appropriate value. On the Adult dataset (Table 5), it may be seen that FairKM performs better than Avg. ZGYA on seven out of eight combinations, with it being competitive with the latter on the eighth. 
FairKM is seen to score significantly better than Avg. ZGYA on clustering objective (CO) and silhoutte score (SH), with the gains on the deviation metrics (DevC and DevO) being more modest. It may be noted that CO and SH may be regarded as more reliable measures, since they evaluate the clustering directly. In contrast, DevO and DevC evaluate the deviation against reference K-Means (N) clusterings; these deviation measures penalize deviations even if those be towards other good quality clusterings that may exist in the dataset. The trends from the Adult dataset hold good for the Kinematics dataset as well (see Table 7), confirming that the trends generalize well across datasets of widely varying character. Overall, our results indicate that FairKM is able to generate much better quality clusterings than Avg. ZGYA, when gauged on attributes in N. 5.5.3 Fairness. The fairness evaluation measures for the Adult and Kinematics datasets appear in Tables 6 and 8 respectively; it may be noted that lower values are desirable on all evaluation measures, given that they all measure deviations. In these results, which include a synthetically favorable setting for ZGYA (as noted earlier), the top block indicates the average results across all attributes in S, with the following result blocks detailing the results for the specific parameters in S. The overarching summary of this evaluation suggests, as indicated in the top-blocks across the two tables, that FairKM surpasses the baselines with significant margins. The % impr column indicates the gain achieved by FairKM over the next best competitor. The percentage improvements recorded are around 35 + % on an average for the Adult dataset, whereas the corresponding figure is higher, at around 60 + % for the Kinematics dataset. We wish to specifically make a few observations from the results. First, the closest competitor to FairKM is ZGYA on the Kinematics dataset, whereas K-Means (N) curiously outperforms ZGYA quite consistently on the Adult dataset. This indicates that ZGYA is likely more suited to settings where the number of values taken by the sensitive attribute is less. In our case, the Kinematics dataset has all binary attributes in S, whereas Adult dataset has sensitive attributes that take as many as 41 different values. Second, while FairKM is designed to accommodate S attributes that take many different values, the fairness deviations appear to degrade, albeit at a much lower pace than ZGYA, as attributes take on very many values. This is indicated by the lower performance (with small margins) on the native country (41 values) attribute at k = 5 in Table 6. However, promisingly, it is able to utilize the additional flexibility that is provided by larger ks to ensure higher rates of fairness on them. As may be seen, FairKM recovers well to perform significantly better on native country at k = 15. This indicates that FairKM will benefit from a higher flexibility in cluster assignment (with higher k) when there are a number of (high cardinality) attributes to ensure fairness over. Third, the FairKM formulation targets to minimize overall fairness and does not specifically nudge it towards ensuring good performance on the max measures (ME and MW) that quantify the worst deviation across clusters. Thus, it\u2019s design allows to choose higher fairness in multiple clusters even at the expense of disadvantaging fairness in one or few clusters, which is indeed undesirable. The performance on ME and MW \fEvaluation k=5 k=15 Measure K-Means (N) Avg. 
ZGYA FairKM K-Means (N) Avg. ZGYA FairKM CO \u2193 1120.9112 10791.8311 1345.1688 837.9785 4095.8366 1235.2859 SH \u2191 0.7212 0.0557 0.3918 0.6076 0.0573 0.3747 DevC \u2193 0.0 8.4597 8.4707 0.0 39.3615 13.1244 DevO \u2193 0.0 0.0306 0.0233 0.0 0.0360 0.0256 Table 5: Clustering quality on Adult Dataset FairKM vs. Average across {ZGYA(S)|S \u2208S}, shown with K-Means(N). S Evaluation k=5 k=15 Attribute Measure K-Means(N) ZGYA(S) FairKM FairKM Impr(%) K-Means(N) ZGYA(S) FairKM FairKM Impr(%) Mean AE 0.0459 0.1201 0.0278 39.5357 0.0537 0.1289 0.0295 45.0796 across S AW 0.0161 0.0370 0.0087 45.7857 0.0194 0.0398 0.0094 51.7043 Attributes ME 0.2063 0.8729 0.1457 29.4002 0.2475 0.7810 0.1542 37.6985 MW 0.0740 0.1235 0.0502 32.0985 0.0753 0.1262 0.0542 28.0040 Results for Each Sensitive Attribute in S below. Marital Status AE 0.0792 0.0886 0.0539 31.9408 0.0853 0.1318 0.0558 34.5263 AW 0.0182 0.0159 0.0132 16.5650 0.0191 0.0258 0.0136 28.4239 ME 0.3055 0.7356 0.2578 15.6087 0.3572 0.6365 0.2607 27.0042 MW 0.0573 0.0890 0.0592 -3.3881 0.0566 0.0952 0.0604 -6.6317 Rel. Status AE 0.0711 0.1743 0.0486 31.5656 0.0808 0.1903 0.0500 38.1517 AW 0.0197 0.0371 0.0146 25.8744 0.0219 0.0429 0.0150 31.3346 ME 0.3331 0.7796 0.2717 18.4487 0.3823 0.7804 0.2777 27.3667 MW 0.0732 0.1205 0.0760 -3.8026 0.0750 0.1439 0.0776 -3.4770 Race AE 0.0163 0.0564 0.0066 59.2251 0.0168 0.0647 0.0079 53.0164 AW 0.0053 0.0154 0.0023 55.9473 0.0055 0.0162 0.0028 48.9813 ME 0.0385 1.0085 0.0266 30.8822 0.0565 1.2175 0.0336 40.6276 MW 0.0126 0.1159 0.0092 27.3039 0.0165 0.1142 0.0115 30.2523 Gender AE 0.0529 0.2535 0.0183 65.3039 0.0711 0.2256 0.0208 70.7472 AW 0.0370 0.1153 0.0130 64.9210 0.0499 0.1122 0.0147 70.4913 ME 0.3324 0.9793 0.1487 55.2713 0.4028 1.0201 0.1697 57.8731 MW 0.2254 0.2568 0.1051 53.3681 0.2262 0.2671 0.1200 46.9680 Native Country AE 0.0101 0.0276 0.0113 -11.2331 0.0146 0.0323 0.0130 10.9108 AW 0.0005 0.0013 0.0006 -15.2027 0.0007 0.0015 0.0006 4.7201 ME 0.0221 0.8612 0.0236 -6.4585 0.0385 0.2506 0.0292 24.1555 MW 0.0012 0.0354 0.0016 -25.8608 0.0020 0.0107 0.0015 25.6292 Table 6: Fairness evaluation on Adult Dataset S-blind K-Means, Single invocation of FairKM on all S attributes, Separate Invocations of ZGYA on each attribute in S. (Note: This is a synthetic favorable setting for ZGYA, to stress test FairKM against ZGYA). Evaluation K-Means (N) Avg. ZGYA FairKM CO \u2193 145.6441 164.4703 148.1003 SH \u2191 0.0390 -0.0001 0.0149 DevC \u2193 0.0 1.1844 1.1241 DevO \u2193 0.0 0.0032 0.0038 Table 7: Clustering quality on Kinematics Dataset FairKM vs. Average across {ZGYA(S)|S \u2208 S}, shown with KMeans(N). suggest that such trends are not widely prevalent, with FairKM recording reasonable gains on ME and ME. However, cases such as marital status in Table 6 and Type-3 in Table 8 suggest that is a direction in which FairKM could improve. Finally, the overall summary from Tables 6 and 8 suggest that FairKM delivers much fairer clusters on S attributes, and records significant gains over the baselines, in our empirical evaluation. 5.6 FairKM vs. ZGYA Having compared FairKM against ZGYA for fairness in a synthetic setting that was favorable to the latter in the previous section, we now do a more direct comparison here. In particular, we consider comparing the FairKM and ZGYA instantiations with each sensitive attribute separately, which offers a more level Figure 1: Adult Dataset: AW Comparison setting. 
Figure 1 illustrates the comparison on the AW evaluation measure over the Adult dataset for each S attribute with ZGYA(S) and FairKM(S) values shown separated by the FairKM (All) value in between them; all these are values obtained with k = 5. The FairKM (All) is simply FairKM instantiated with all attributes in S, which was used in the comparison in the previous section. As may be seen, with FairKM(S) focusing on just the chosen attribute (as opposed to FairKM (All) that needs to spread attention across all attributes in S), FairKM(S) is able to achieve better values for AW. Thus, FairKM(S) is seen to beat ZYGA(S) by larger margins than FairKM (All), as expected. The Race attribute shows \fS Attribute Metric K-Means (N) ZGYA (S) FairKM FairKM Impr(%) Mean AE 0.1704 0.1183 0.0172 85.4311 across S AW 0.1021 0.0766 0.0120 84.3660 Attributes ME 0.3744 0.2571 0.1488 42.1364 MW 0.2083 0.1676 0.0852 49.1420 Results for Each Sensitive Attribute in S below. Type-1 AE 0.2567 0.1821 0.0148 91.8775 AW 0.1289 0.1000 0.0103 89.7246 ME 0.4909 0.3502 0.1673 52.2397 MW 0.2828 0.2321 0.1004 56.7159 Type-2 AE 0.2145 0.1481 0.0163 88.9722 AW 0.1213 0.0994 0.0113 88.6729 ME 0.5116 0.3398 0.1600 52.9166 MW 0.2149 0.1931 0.0888 54.0235 Type-3 AE 0.0759 0.0604 0.0178 70.5473 AW 0.0535 0.0427 0.0123 71.2578 ME 0.1935 0.1270 0.1527 -20.2176 MW 0.1206 0.0898 0.0754 16.0235 Type-4 AE 0.1631 0.1009 0.0152 84.9649 AW 0.1079 0.0708 0.0107 84.9541 ME 0.3605 0.2410 0.1263 47.5836 MW 0.2103 0.1662 0.0770 53.6570 Type-5 AE 0.1415 0.0999 0.0221 77.8973 AW 0.0989 0.0703 0.0154 78.0243 ME 0.3155 0.2273 0.1375 39.5175 MW 0.2128 0.1569 0.0846 46.1075 Table 8: Fairness evaluation on Kinematics Dataset Sblind K-Means, Single invocation of FairKM on all S attributes, Separate Invocations of ZGYA on each attribute in S. (Note: This is a synthetic favorable setting for ZGYA, to stress test FairKM against ZGYA). Figure 2: Adult Dataset: MW Comparison Figure 3: Kinematics Dataset: AW Comparison a different trend, with FairKM(S) recording a slightly higher AW than FairKM(S). While we believe this is likely to be due to an Figure 4: Kinematics Dataset: MW Comparison Figure 5: Kinematics Dataset: (CO and SH) vs. \u03bb Figure 6: Kinematics Dataset: (DevC and DevO) vs. \u03bb unusually high skew in the race attribute where 87% of objects take the same single value, this warrants further investigation. Figure 2 presents the corresponding chart for MW evaluation measure, and offers similar high-level trends as was observed for AW. The corresponding charts for the Kinematics dataset appear in Figures 3 and 4 respectively. Over the much smaller Kinematics dataset, the gains by FairKM(S) over FairKM (All) are more pronounced in MW with both techniques recording reasonably similar AW numbers. The observed trends were seen to hold for AE and ME evaluation measures as well, those charts excluded for brevity. To summarize the findings across the datasets, it may be seen that FairKM(S) may be seen to beat the ZGYA(S) baseline with larger margins than FairKM (All) on an average, as desired. \fFigure 7: Kinematics Dataset: Fairness Metrics vs. \u03bb 5.7 FairKM Sensitivity to \u03bb We now study FairKM\u2019s sensitivity to it\u2019s only parameter \u03bb, the weight for the fairness term. With increasing \u03bb, we expect FairKM to fare better on fairness measures with corresponding degradations in the clustering quality measures. The vice versa is expected to hold with decreasing \u03bb. 
We observed such desired trends across Adult and Kinematics datasets, with changes being slower and steadier for the larger Adult dataset. This is on expected lines with the number of parameters such as clustering assignments being larger on the Adult dataset. In the interest of focusing on the smaller dataset, we outline the changes with \u03bb on clustering quality and fairness measures on the Kinematics dataset, when \u03bb is varied from 1000 to 10000. The variations on the CO and SH measures are illustrated in Figure 5, whereas the variations on DevC and DevO are plotted in Figure 6. We use both sides of the Y-axis to plot the measures which widely vary in terms of their ranges; the axis used is indicated in the legend. As may be seen from them, CO, SH and DevO record slow and steady degradation (the Y-axis is stretched to highlight the region of the change; it may be noted that the quantum of change is very limited) with increasing \u03bb. The degradation in DevC, however, is more jittery, while the direction of change remains on expected lines. The fairness deviation measures are plotted against varying \u03bb (once again, on both Y-axes) in Figure 7. They record gradual but steady improvements (being deviations, they are better when low) with increasing \u03bb, on expected lines. Overall, it may be seen that FairKM moves steadily but gradually towards fairness with increasing \u03bb, as desired. 6" + } + ], + "Stanley Simoes": [ + { + "url": "http://arxiv.org/abs/2212.14467v1", + "title": "Cluster-level Group Representativity Fairness in $k$-means Clustering", + "abstract": "There has been much interest recently in developing fair clustering\nalgorithms that seek to do justice to the representation of groups defined\nalong sensitive attributes such as race and gender. We observe that clustering\nalgorithms could generate clusters such that different groups are disadvantaged\nwithin different clusters. We develop a clustering algorithm, building upon the\ncentroid clustering paradigm pioneered by classical algorithms such as\n$k$-means, where we focus on mitigating the unfairness experienced by the\nmost-disadvantaged group within each cluster. Our method uses an iterative\noptimisation paradigm whereby an initial cluster assignment is modified by\nreassigning objects to clusters such that the worst-off sensitive group within\neach cluster is benefitted. We demonstrate the effectiveness of our method\nthrough extensive empirical evaluations over a novel evaluation metric on\nreal-world datasets. Specifically, we show that our method is effective in\nenhancing cluster-level group representativity fairness significantly at low\nimpact on cluster coherence.", + "authors": "Stanley Simoes, Deepak P, Muiris MacCarthaigh", + "published": "2022-12-29", + "updated": "2022-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CY" + ], + "main_content": "Introduction Fairness in ML [13] has seen much scholarly activity in recent times. Within the broader umbrella of fair ML, several fair clustering methods have also been developed [11]. Most of these endeavours, starting from a pioneering work by Chierichetti et al [12], have considered ensuring proportional representation of sensitive groups \u2013 such as those de\ufb01ned on race and gender \u2013 within each cluster in the output; this is often referred to as group fairness [16]. Recent formulations of fair clustering are able to incorporate considerations to fairness along multiple sensitive attributes together [2]. 
Such representational fairness of sensitive groups may be seen as the application of the notion of proportional representation, aka statistical parity [7], within clustering. It may be argued that simply ensuring representation of sensitive groups within each cluster is insufficient, especially within the popular paradigm of centroid clustering, pioneered by the k-means clustering algorithm [24]. In centroid clustering, each cluster is represented by a prototype, often informally referred to as the centroid. A data object's proximity to its cluster centroid is a key criterion determining the quality of the clustering. For example, an object that happens to be very close to its centroid could be thought of as being 'better represented' within the clustering than another that happens to be much further away from its centroid. Recent work on fair clustering has sought to deepen uniformity of distance-to-centroid, dubbed as representativity fairness [26], across all dataset objects. Representativity fairness seeks to ensure deeper levels of uniformity of representativity (i.e., centroid distance) across objects, regardless of their sensitive attribute membership, and may thus be regarded as an instantiation of individual fairness [16]. The notion of representativity has been extended to the framework of sensitive groups independently by Abbasi et al [1] and Ghadiri et al [19]. While the methodologies they propose are different, they are strikingly similar in the nature of the fairness sought. They consider the aggregate representativity (they use the term representation) across objects within each sensitive group, and target proportionality along such group-specific aggregates across sensitive groups. For example, within a social media profile clustering scenario, this implies that the mean distance of female profiles from their centroids should be as close as possible to the mean distance of male profiles from their centroids. This fairness condition targets that groups are fairly treated across all the clusters in the output cluster assignment. Against this backdrop, we note two issues with such representativity aggregations across sensitive groups and clusters. First, note that representativity is quantified as the distance of an object to its cluster centroid. (Representativity, in a literal sense, may be argued to mean centroid proximity rather than centroid distance; however, since it is easier to deal with distances, we consistently refer to minimisation of centroid distance as representativity enhancement, which is essentially maximising centroid proximity.) This construction of representativity, we observe, does not lend itself well to cross-cluster comparisons, especially when there could be clusters with widely varying sizes.
[Figure 1: Illustration of representativity across differently sized clusters]
Consider the toy 2-cluster clustering in Figure 1, where the black and circled white objects represent different sensitive groups, with the white object representing the centroid; also assume that the grey area is full of other objects which we do not consider for now.
The black object has better representativity (i.e., better centroid proximity) in the larger cluster, and vice versa. However, given the size of the clusters, the same numeric difference in representativity should be regarded as almost insigni\ufb01cant in the larger cluster, and very consequential in the smaller cluster, when conditioned on the relative cluster sizes. In other words, considering representativity against the backdrop of the cluster sizes, we may assume that the circled white group gets a much better overall representativity than the black group. In sharp contrast to such intuitive judgement, observe that the simple aggregate of representativity misleadingly puts the black and circled white groups on an equal footing. Second, note that the fairness goal seeks to bring about parity across sensitive groups. For a clustering task, as in the case of earlier work on statistical parity for group fairness (e.g., Chierichetti et al [12], Abraham et al [2]), the natural granularity for evaluating clustering fairness is at the level of each cluster. The cross-cluster aggregation of representativities is neither aligned with pragmatism (e.g., decision scenarios almost always involve cluster-level decision-making [25]) or underpinning political justi\ufb01cations (e.g., in contrast to extant justi\ufb01cations of statistical parity [20]). We additionally note that the same sensitive group may be advantaged in one cluster and disadvantaged in another. The paradigm of cross-cluster aggregation of representativities allows such e\ufb00ects to cancel out, thus acting as a veneer to conceal potentially deep levels of intra-cluster unfairness. 1.1 Our Contributions We propose a novel formulation of group fairness extending the notion of representativity fairness along sensitive groupings in the data. Our cluster-level group representativity fairness targets cluster-level fairness, thus mitigating identi\ufb01ed crosscluster representativity aggregation issues. We propose a novel fair clustering method targeting to optimise for cluster-level group representativity fairness, and illustrate, through extensive empirical evaluations, that our method is able to achieve high degrees of fairness on appropriate evaluation metrics. 2 Related Work We now brie\ufb02y summarise research on group fairness and individual fairness in fair clustering. 2.1 Group Fairness The notion of group fairness, which targets proportional representation of sensitive attribute groups, was pioneered by Chierichetti et al [12] in 2017. Since then, group fair clustering research has diversi\ufb01ed into considering settings such as spectral clustering [22], hierarchical clustering [4] and probabilistic scenarios [17]. Along another facet, variants of group fairness notions such as capped representation [3] and equitable group-level representativity [1, 19] have been explored. There have also been fairness conceptualisations such as proportional fairness [10] that straddle boundaries between group and individual fairness. 2.2 Individual Fairness At a risk of overgeneralisation, one may paraphrase individual fairness as ensuring that similar objects be given similar outcomes (i.e., cluster membership in clustering). While this is inarguably aligned with the clustering objective of maximising intra-cluster similarities and minimising inter-cluster similarities, clustering algorithms can only achieve the objective on a best-e\ufb00ort basis, making some shortfall inevitable. 
Specific formulations of individually fair clustering methods have explored curtailing the shortfall through ways such as using randomness [8]. P & Abraham [26] considers optimising for the uniformity of centroid-proximity (aka representativity) across objects. The focus on representativity continues in much recent work, with Vakilian & Yalciner [27] considering an object-specific upper bound for representativity and Chakrabarti et al [9] proposing that representativities between pairs of objects be bounded above by a specified multiplication factor.

2.3 Positioning Our Work
Our notion of cluster-level group representativity fairness is a novel conceptualisation of fairness unexplored in previous work. Our usage of cluster-level group fairness quantification as an intermediate level between individual and group makes it distinct from previous work on both fairness streams.

3 Background
We briefly outline the formulation of the popular k-means clustering algorithm [24], as essential background to describe our method. Consider a set of relational data objects X = {... x ...} where x \in \mathbb{R}^d, which need to be partitioned into k clusters, \mathcal{C} = {... C ...}. k-means uses an EM-style [14] optimisation framework to optimise for:

(3.1)  \sum_{C \in \mathcal{C}} \sum_{x \in C} || x - \mu_C ||^2

where \mu_C \in \mathbb{R}^d is the centroid or prototype object of cluster C. The reader would notice that the objective relates to a given cluster assignment \mathcal{C}; the EM-style optimisation starts with a given cluster assignment, and iteratively refines the cluster assignment and centroids in order to minimise the objective in Equation 3.1. As a generic clustering method, k-means is obviously agnostic to sensitive attribute groups to which individual objects belong. Using the terminology of representativity (i.e., centroid proximity), one may observe that k-means seeks to optimise for the sum of representativities across all objects in the dataset. This may be regarded as a Benthamite [5] utilitarian objective that seeks the greatest good for the greatest number. Abbasi et al [1] and Ghadiri et al [19] note that this can result in cluster centroids representing sensitive groups differently, often favouring one over the other. This would have implications especially in cases where the centroids are used to summarise objects in the clusters. To mitigate the disparity in representativities of sensitive groups, Abbasi et al [1] and Ghadiri et al [19] independently proposed a new fair k-means objective. Suppose each data object in X belongs to one of several groups S (e.g., female, male, etc.) defined across a sensitive attribute \mathcal{S} = {... S ...} (e.g., sex). The fair k-means objective [1, 19] seeks to optimise for the worst-off sensitive group at the cluster assignment level:

(3.2)  \max_{S \in \mathcal{S}} \frac{1}{|S|} \sum_{C \in \mathcal{C}} \sum_{x \in C \cap S} || x - \mu_C ||^2

The objective in Equation 3.2 suffers from a variant of the issue observed in the k-means objective: the averaging property can allow a sensitive group to have a low representativity in one cluster which is discounted by moderately high representativities in the other clusters. Further, a single sensitive group may not be the worst-off in all clusters. Towards this, we turn our attention to mitigating group unfairness at the cluster level, rather than the cluster assignment level.
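To make the two objectives concrete, the following is a minimal NumPy sketch (not from the paper; all variable names are illustrative) of how the classical k-means objective in Equation 3.1 and the group-aggregated fair k-means objective in Equation 3.2 could be evaluated, given a dataset, a cluster assignment, centroids, and a sensitive attribute:

```python
import numpy as np

def kmeans_objective(X, labels, centroids):
    # Equation 3.1: sum of squared distances of every object to its assigned centroid.
    return sum(np.sum((X[labels == c] - mu) ** 2) for c, mu in enumerate(centroids))

def fair_kmeans_objective(X, labels, centroids, groups):
    # Equation 3.2 (Abbasi et al. / Ghadiri et al. style): average the squared
    # centroid distances of each sensitive group over the whole assignment,
    # then report the worst-off group's value.
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        d2 = np.sum((X[mask] - centroids[labels[mask]]) ** 2, axis=1)
        per_group[g] = d2.sum() / mask.sum()   # (1/|S|) * sum over all clusters
    return max(per_group.values()), per_group

# Toy example with made-up data (k = 5 clusters, binary sensitive attribute).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
groups = rng.integers(0, 2, size=100)
centroids = rng.normal(size=(5, 2))
labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

print(kmeans_objective(X, labels, centroids))
print(fair_kmeans_objective(X, labels, centroids, groups))
```

The sketch shows the key difference: Equation 3.1 aggregates over all objects regardless of group, while Equation 3.2 aggregates per group across all clusters and then takes the maximum.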
4 Problem Statement
Our notion of cluster-level group representativity fairness targets to minimise the disparity in the representativities of the sensitive groups within each cluster. We assume a single sensitive attribute \mathcal{S}. Consider a cluster C within a cluster assignment \mathcal{C}. The representativity loss of the set of objects taking S within C is denoted as:

(4.3)  f(S, C) = \frac{1}{|C \cap S|} \sum_{x \in C \cap S} || x - \mu_C ||^2

Thus for each cluster C, we obtain a representativity vector with one component per sensitive group. We say that a cluster assignment is fair if all clusters in it are fair, and we say that a cluster is fair if its corresponding representativity vector is uniform, i.e., the gap between the best-off and worst-off groups is as small as possible. In view of this novel formulation of cluster-level group representativity fairness, we consider developing a metric towards quantifying adherence to the notion. In this paper, we look at the case where the number of sensitive groups is not large. Accordingly, we capture the representativity disparity between the best-off and worst-off groups simply by taking the difference in their representativities, given by:

(4.4)  CD(C) = \left( \frac{\max_{S \in \mathcal{S}} f(S, C) - \min_{S \in \mathcal{S}} f(S, C)}{\min_{S \in \mathcal{S}} f(S, C)} \right) \times 100

CD(C), or cluster disparity, quantifies the difference between the representativity losses for the best-off and worst-off groups within cluster C, expressed as a percentage. Evidently, we would ideally like CD(C) to be 0.0, which is achieved when the same representativity loss is experienced by each sensitive group. The formulation of CD(C) as a percentage allows a fair basis for aggregating this across clusters of different sizes and shapes, thus addressing the first issue with representativity aggregation pointed out in Section 1, that of cluster sizes. We aggregate this across clusters in the cluster assignment \mathcal{C} to arrive at a single measure for the overall cluster assignment:

(4.5)  ACD(\mathcal{C}) = \frac{1}{|\mathcal{C}|} \sum_{C \in \mathcal{C}} CD(C)

The average cluster disparity, abbreviated as ACD(\mathcal{C}), being an average of percentages, evaluates to a non-negative value, with a lower value indicating a smaller disparity between the worst-off group and best-off group in individual clusters, and consequently a fairer clustering.

5 Proposed Method
We now describe our fair clustering method focused on cluster-level group representativity fairness. We start by describing our objective function followed by the optimisation framework.

5.1 Objective Function
The notion of fairness outlined in Section 4 targets to enhance the uniformity of the representativity vector for each cluster. This can be trivially achieved by having a high representativity loss for all sensitive groups (as noted by Abbasi et al [1]) which would make the resulting cluster assignment of poor utility. Towards improving uniformity of the representativities, we take cue from contemporary theories in political philosophy and focus on mitigating the representativity loss (i.e., f(., .)) experienced by the worst-off group within each cluster. The worst-off group in a cluster C is given by:

(5.6)  \arg\max_{S \in \mathcal{S}} f(S, C)

This notion espouses the ethos across several popular philosophical theories including the concern for the most vulnerable within the famed difference principle [18] of distributive justice due to John Rawls2.
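As a concrete illustration of the quantities defined above, here is a small, hypothetical NumPy sketch that computes the per-group representativity loss f(S, C) of Equation 4.3, the cluster disparity CD(C) of Equation 4.4, the average cluster disparity of Equation 4.5, and the worst-off group of Equation 5.6. It assumes every cluster contains at least one member of every group, and it is not taken from the authors' released code:

```python
import numpy as np

def representativity_loss(X, labels, centroids, groups, g, c):
    # f(S, C), Equation 4.3: mean squared centroid distance of the
    # members of group g assigned to cluster c.
    mask = (labels == c) & (groups == g)
    return np.mean(np.sum((X[mask] - centroids[c]) ** 2, axis=1))

def cluster_disparity(X, labels, centroids, groups, c):
    # CD(C), Equation 4.4: relative gap (in percent) between the
    # worst-off and best-off group within cluster c.
    losses = [representativity_loss(X, labels, centroids, groups, g, c)
              for g in np.unique(groups)]
    return (max(losses) - min(losses)) / min(losses) * 100.0

def average_cluster_disparity(X, labels, centroids, groups):
    # ACD, Equation 4.5: mean of CD(C) over all clusters.
    return np.mean([cluster_disparity(X, labels, centroids, groups, c)
                    for c in range(len(centroids))])

def worst_off_group(X, labels, centroids, groups, c):
    # Equation 5.6: the sensitive group with the largest f(S, C) in cluster c.
    gs = np.unique(groups)
    losses = [representativity_loss(X, labels, centroids, groups, g, c) for g in gs]
    return gs[int(np.argmax(losses))]
```

With data as in the earlier toy example, average_cluster_disparity returns a single percentage, where 0 corresponds to perfectly uniform per-group losses in every cluster.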
Thus, the overall clustering objective we address here is to generate a cluster assignment that comprises coherent clusters (the singular focus of algorithms such as k-means) where, additionally, the representativity loss for the worst-o\ufb00group within each cluster is mitigated as much as possible. Given that the utilitarian consideration of cluster coherence (one that classical kmeans also targets to optimise) would be in apparent tension with the cluster-level group representativity fairness consideration, we look to deepen the latter at as little detriment to the former as possible. Given our intent of mitigating the representativity loss of the worst-o\ufb00sensitive group within each cluster, we model our objective function as simply the aggregate of representativity losses of the worst-o\ufb00cluster-level group: (5.7) J(C) = X C\u2208C max S\u2208S \u0012 1 |C \u2229S| X x\u2208C\u2229S ||x \u2212\u00b5C||2 \u0013 This, as may be noted, captures the fairness ethos espoused by our formulation of cluster-level group representativity fairness, albeit using signi\ufb01cantly di\ufb00erent methodology. 2https://en.wikipedia.org/wiki/John_Rawls 5.2 Optimisation Framework To estimate the parameters \u2013 C and {. . . \u00b5C . . .} \u2013 in our objective function (Equation 5.7), we follow the same EM-style iterative procedure as used in the classical k-means algorithm that alternates between the following E and M steps: 1. E-step: estimate the cluster assignment for each x \u2208X keeping the set of cluster centroids {. . . \u00b5C . . .} \ufb01xed, and 2. M-step: estimate the set of cluster centroids {. . . \u00b5C . . .} keeping the cluster assignment stationary. The method we propose falls in the category of postprocessing methods which is in line with classical works in fair clustering such as Ahmadian et al [3], Bera et al [6], Esmaeili et al [17], Kleindessner et al [21], among others. It operates over the output of a utilitarian clustering algorithm \u2013 the classical k-means algorithm; we use the generated cluster centroids as the initial cluster centroids for our method. We now describe the details of the E and M steps within our optimisation framework. 5.3 E-step: Estimating the Cluster Assignment Given the set of cluster centroids {. . . \u00b5C . . .}, we need to assign objects to clusters that minimise the objective function in Equation 5.7. Towards operationalising this, given the complexity of the objective function, we perturb the existing cluster assignment by greedily reassigning objects \u2013 in round-robin fashion \u2013 to new clusters such that the value of the objective function decreases. Thus, if an object x \u2208X is reassigned from cluster C to cluster C\u2032, the new cluster assignment C\u2032 is given by: (5.8) C\u2032 = C \\ \b C, C\u2032\t \u222a \b C \\ {x}, C\u2032 \u222a{x} \t Algorithm 1 outlines our greedy approach. Within each E-step, this entails trying out O(|X| \u00d7 k) cluster reassignments, k \u22121 per object. It is notable that the change between J(C\u2032) and J(C) where C and C\u2032 di\ufb00er in the membership of a single object can be computed very e\ufb03ciently without a full dataset-wide estimation. While we do not include the details of such e\ufb03cient computations herein, the computation of such incremental changes is similar in spirit to what is outlined within Section 4.2.1 in Abraham et al [2]. 5.4 M-step: Estimating the Cluster Centroids We now look at estimating the set of cluster centroids {. . . \u00b5C . . 
.} given the cluster assignment \mathcal{C}. Our goal is to minimise the objective function in Equation 5.7 while keeping the cluster assignment \mathcal{C} fixed.

Algorithm 1 Reassign
1: for all x \in X do
2:   for all C' \in \mathcal{C} do
3:     Obtain the new cluster assignment \mathcal{C}' using Equation 5.8
4:     if J(\mathcal{C}') < J(\mathcal{C}) then
5:       \mathcal{C} \leftarrow \mathcal{C}'

Since the max operator is not differentiable, we use the following differentiable approximation [26]:

(5.9)  \max_{y \in Y} g(y) \approx \frac{1}{\phi} \log_e \sum_{y \in Y} \exp(\phi \times g(y))

where \phi is a large enough positive constant that amplifies the significance of the largest distance. Substituting Equation 5.9 in Equation 5.7 gives us a differentiable approximation for the objective function:

(5.10)  J_{approx} = \sum_{C \in \mathcal{C}} \frac{1}{\phi} \log_e \sum_{S \in \mathcal{S}} \exp(\phi \times f(S, C))

We model our optimisation steps along the framework of gradient descent, where the intent is to move along the negative gradient. Towards this, we note that the derivative of J_{approx} with respect to \mu_C evaluates to:

(5.11)  \frac{\partial}{\partial \mu_C} J_{approx} = -2 \times \frac{\sum_{S \in \mathcal{S}} \left( w(S, C) \times \frac{1}{|C \cap S|} \sum_{x \in C \cap S} (x - \mu_C) \right)}{\sum_{S \in \mathcal{S}} w(S, C)}

where

(5.12)  w(S, C) = \exp(\phi \times f(S, C))

Much like in the E-step, regularities in the construction of Equation 5.11 allow for efficient incremental gradient computation. Equations 5.11 and 5.12 are together used to update the set of cluster centroids {... \mu_C ...} within the gradient descent framework:

(5.13)  \mu_C \leftarrow \mu_C - \eta \frac{\partial}{\partial \mu_C} J_{approx}

where \eta \in \mathbb{R}^+ is the learning rate. The M-step is outlined in Algorithm 2. To summarise, within each M-step, each cluster centroid, chosen in round-robin fashion, is updated once using the update in Equation 5.13.

Algorithm 2 Update
1: for all C \in \mathcal{C} do
2:   Compute \frac{\partial}{\partial \mu_C} J_{approx} using Equations 5.11 and 5.12
3:   Perform a single update for \mu_C using Equation 5.13

5.5 Stopping Condition
Being initialised using a utilitarian clustering, such as one that optimises for the k-means objective, each iteration progressively moves the cluster assignment away from the utilitarian starting point towards cluster-level group representativity fairness. This monotonicity makes the choice of the stopping condition critical to ensure that a good trade-off between utilitarianism and fairness is achieved within the eventual cluster assignment. In other words, the stopping condition for our iterative procedure is determined by the trade-off between two factors:
1. the deterioration in utility, i.e., the classical k-means objective, given that our method starts from the cluster assignment generated by the classical k-means algorithm, and
2. the improvement in our objective function towards group fairness.
It may be noted that both the terms above may be computed intrinsically, i.e., without using any form of external input, and are thus available to the optimisation approach. This allows us to formulate a contrastive stopping condition that can choose good trade-off points in the optimisation. At any iteration, we consider the overall percentage improvement in our fairness objective over the past ten iterations, and the overall percentage deterioration in the k-means objective across the past ten iterations, and break out of the iterative loop when the latter exceeds the former.
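Putting the greedy E-step, the soft-max weighted M-step and the contrastive stopping rule together, the following is a rough, illustrative Python sketch of the overall post-processing loop (it is not the authors' implementation, it recomputes the objective from scratch instead of incrementally, and it assumes every cluster keeps at least one member of every sensitive group):

```python
import numpy as np

def group_losses(X, labels, centroids, groups, c):
    # f(S, C) for every sensitive group S in cluster c (Equation 4.3).
    return np.array([np.mean(np.sum((X[(labels == c) & (groups == g)] - centroids[c]) ** 2, axis=1))
                     for g in np.unique(groups)])

def j_objective(X, labels, centroids, groups):
    # Equation 5.7: sum over clusters of the worst-off group's loss.
    return sum(group_losses(X, labels, centroids, groups, c).max()
               for c in range(len(centroids)))

def kmeans_objective(X, labels, centroids):
    return np.sum((X - centroids[labels]) ** 2)

def fair_postprocess(X, labels, centroids, groups, phi=3.0, eta=0.01, max_iter=200, window=10):
    labels, centroids = labels.copy(), centroids.astype(float).copy()
    util = [kmeans_objective(X, labels, centroids)]
    fair = [j_objective(X, labels, centroids, groups)]
    for _ in range(max_iter):
        # E-step: greedy round-robin reassignment of single objects (Algorithm 1).
        for i in range(len(X)):
            best_c, best_j = labels[i], j_objective(X, labels, centroids, groups)
            for c in range(len(centroids)):
                labels[i] = c
                j = j_objective(X, labels, centroids, groups)
                if j < best_j:
                    best_c, best_j = c, j
            labels[i] = best_c
        # M-step: one soft-max weighted gradient step per centroid (Equations 5.11-5.13).
        for c in range(len(centroids)):
            grads, weights = [], []
            for g in np.unique(groups):
                mask = (labels == c) & (groups == g)
                w = np.exp(phi * np.mean(np.sum((X[mask] - centroids[c]) ** 2, axis=1)))
                grads.append(w * (X[mask] - centroids[c]).mean(axis=0))
                weights.append(w)
            centroids[c] -= eta * (-2.0 * np.sum(grads, axis=0) / np.sum(weights))
        util.append(kmeans_objective(X, labels, centroids))
        fair.append(j_objective(X, labels, centroids, groups))
        # Contrastive stopping rule: stop once the utility deterioration over the last
        # `window` iterations exceeds the fairness improvement over the same window.
        if len(fair) > window:
            fair_gain = (fair[-window - 1] - fair[-1]) / fair[-window - 1] * 100
            util_loss = (util[-1] - util[-window - 1]) / util[-window - 1] * 100
            if util_loss > fair_gain:
                break
    return labels, centroids
```

The sketch makes the utility/fairness tension explicit: the E- and M-steps only optimise the fairness objective, while the stopping check is the sole place where the classical k-means objective re-enters.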
This is complemented by another stopping condition where we break the iterative process after 200 iterations, if the contrastive stopping condition was not reached earlier. 6 Empirical Evaluation We now detail our empirical evaluation. We start by describing the experimental setup, followed by results and analyses. All underlying research data and code will be made publicly available upon acceptance. 6.1 Experimental Setup 6.1.1 Datasets The datasets used in our empirical evaluation are based on the publicly available Adult \fdataset sensitive attribute non-sensitive attributes objects name groups Adult sex 2 26 30718 race 5 CreditCard SEX 2 77 30000 Table 1: Datasets used (after processing) dataset3 and CreditCard dataset4 from the UCI repository [15]. Both datasets contain data about humans and include sensitive information such as race and sex, making them exceedingly popular in the algorithmic fairness community [11, 23]. In case of the Adult dataset, we use workclass, education-num, occupation, capital-gain, capital-loss, hours-per-week as non-sensitive attributes. We do not use fnlwgt (refer to Le Quy et al [23]) and education (which is the same as education-num). The two sensitive attributes we consider are sex and race. In case of the CreditCard dataset, we use LIMIT BAL, PAY x (6 attributes), BILL AMTx (6 attributes), PAY AMTx (6 attributes) as the non-sensitive attributes. The sensitive attribute we consider is SEX. The two datasets are processed as follows. Among the non-sensitive attributes, the continuous ones are standardised (i.e., zero mean and unit variance) on the same lines as Ghadiri et al [19], and the categorical ones are one hot encoded. Table 1 contains information about the processed versions of the datasets used in our evaluation. 6.1.2 Baselines We benchmark our method against the classical k-means algorithm, which also forms our initialisation. Ours being a novel fairness formulation that has been hitherto unexplored in literature, there are no suitable state-of-the-art baseline clustering methods to compare against. Abbasi et al [1] and Ghadiri et al [19], being based on group representativity fairness, are related but are optimised for a di\ufb00erent fairness objective; we compare our method against Ghadiri et al\u2019s Fair-Lloyd [19]. Since the available implementation of Fair-Lloyd5 can only handle binary sensitive attributes, we do not evaluate it on the Adult dataset with race as the sensitive attribute. We do not compare with Abbasi et al [1] because it is more suited for the facility location problem rather than the k-means clustering problem. 6.1.3 Parameter Con\ufb01guration In all experiments, we set the number of clusters to be generated k=5. We run Fair-Lloyd using its default settings. In 3https://archive.ics.uci.edu/ml/datasets/adult 4https://archive.ics.uci.edu/ml/datasets/default+of+ credit+card+clients 5https://github.com/samirasamadi/SociallyFairKMeans dataset sensitive attribute method average cluster disparity % change Adult sex classical k-means 24.6498 0.00% Fair-Lloyd 25.0157 +1.48% ours 9.4066 -61.84% race classical k-means 80.8993 0.00% Fair-Lloyd N/A N/A ours 31.9480 -60.51% CreditCard SEX classical k-means 21.6800 0.00% Fair-Lloyd 16.8425 -22.31% ours 4.8128 -77.80% Table 2: Average cluster disparity (lower is better). The % change column shows the increase (in red) or decrease (in green) in the average cluster disparity with the classical k-means algorithm as the reference. 
dataset sensitive attribute method k-means objective % change Adult sex classical k-means 2.0849 0.00% Fair-Lloyd 2.3548 +12.95% ours 2.5833 +23.91% race classical k-means 2.0860 0.00% Fair-Lloyd N/A N/A ours 2.3299 +11.69% CreditCard SEX classical k-means 8.6176 0.00% Fair-Lloyd 8.5498 -0.79% ours 9.8836 +14.69% Table 3: k-means objective (lower is better). The % change column shows the increase/decrease in the kmeans objective with the classical k-means algorithm as the reference. case of our method, we set \u03c6=3, and learning rate \u03b7=0.01 for gradient descent. All numbers reported are averaged over 100 runs with random initial centroids provided to the classical k-means algorithm (whose generated centroids are subsequently used as the initial centroids of our method). The numbers reported for Fair-Lloyd are also averaged over 100 runs. Following Ghadiri et al [19]\u2019s setup, we allow Fair-Lloyd and our method to run for 200 iterations for the purpose of our empirical evaluation. 6.2 Results 6.2.1 Fairness vs Utility Our method optimises for a fairer objective function di\ufb00erent from the classical utilitarian one. It is widely accepted that increase in fairness almost always causes decrease in utility; it would be of interest to look at the fairness gains obtained due to our method and the corresponding loss in utility. Here, we quantify unfairness with our evaluation metric \u2013 average cluster disparity (Equation 4.5), and \fdataset sensitive attribute method our objective % change Adult sex classical k-means 3.1302 0.00% Fair-Lloyd 3.5403 +13.10% ours 2.1299 -31.96% race classical k-means 4.1148 0.00% Fair-Lloyd N/A N/A ours 2.7868 -32.27% CreditCard SEX classical k-means 43.8224 0.00% Fair-Lloyd 39.6524 -9.52% ours 14.3313 -67.30% Table 4: Our objective (lower is better). The % change column shows the increase (in red) or decrease (in green) in our objective with the classical k-means algorithm as the reference. dataset sensitive attribute method fair k-means objective % change Adult sex classical k-means 2.2473 0.00% Fair-Lloyd 2.5453 +13.26% ours 2.7811 +23.75% race classical k-means 2.6185 0.00% Fair-Lloyd N/A N/A ours 2.8538 +8.99% CreditCard SEX classical k-means 8.6489 0.00% Fair-Lloyd 8.5508 -1.13% ours 10.1417 +17.26% Table 5: Fair k-means objective [1, 19] (lower is better). The % change column shows the increase/decrease in the fair k-means objective with the classical k-means algorithm as the reference. utility with the k-means objective6. From Tables 2 and 3, we see that our method is able to signi\ufb01cantly reduce the unfairness (reduction of 61.84%, 60.51%, 77.80% over the classical k-means algorithm on our datasets; 66.72% on average) at the cost of a relatively smaller increase in the utilitarian objective (increase of 23.91%, 11.69%, 14.69% over the classical k-means algorithm; 16.76% on average). Fair-Lloyd, on the other hand, does not perform as well on this fairness metric (average reduction of 10%). Evidently, our method outperforms the baselines on our cluster-level group fairness evaluation metric, i.e., average cluster disparity. 6.2.2 Fairness Objectives In the interest of comparing across the two threads of representativity fairness, we look at how our method compares with the baselines on the two fairness objectives: (i) our objec6We normalise the original k-means objective here, i.e., divide Equation 3.1 by the number of objects in the dataset. 
0 20 40 60 80 100 2.2 2.4 2.6 2.8 representativity loss Figure 2: Trends in representativity loss f(., .) of the 5 sensitive groups for the Adult dataset with race as the sensitive attribute (random state=3, cluster 4) over 100 iterations. Lower is better. tive7 (Equation 5.7), and (ii) fair k-means objective [1, 19] (Equation 3.2). In Table 4, we see that our method does indeed improve the representativity of the worst-o\ufb00 sensitive groups (reduction of 31.96%, 32.27%, 67.30% over the classical k-means algorithm; 43.84% on average) which indicates that our method is moving in the right direction. The Fair-Lloyd baseline does not perform as well; this is acceptable as it is not designed for cluster-level group fairness. On the other hand, our method causes the worst-o\ufb00sensitive group at the cluster assignment level to have a worse representativity than the classical k-means algorithm, as seen in Table 5. While at \ufb01rst glance this may seem to be a de\ufb01ciency of our method, we note that any improvement in the representativity of this worst-o\ufb00group would potentially result in another sensitive group that is the worst-o\ufb00in some cluster being further disadvantaged in order to improve the representativity of the former group. This would be unacceptable in cases where more than one sensitive groups are historically disadvantaged (e.g., black and American Indians in case of race) and bene\ufb01tting one would result in disadvantaging the other. 6.2.3 Disparity Trends In order to further observe the behaviour of our method, we look at how the representativity losses of the di\ufb00erent sensitive groups vary within a cluster over iterations. As an illustration, Figure 2 shows the trends in cluster 4 that was initialised with random state=3 on the Adult dataset with race as the sensitive attribute. We notice that over iterations (i) the disparity in the representativities of the best-o\ufb00 and worst-o\ufb00sensitive groups decreases, and (ii) the 7We normalise our objective here, i.e., divide Equation 5.7 by the number of clusters k. \frepresentativity loss of the worst-o\ufb00sensitive group decreases, which is what we expect. Also note in this \ufb01gure that the worst-o\ufb00sensitive group in the cluster changes over iterations; this is a trend that we observe to generally hold for clusters. 7" + } + ], + "Luc De Raedt": [ + { + "url": "http://arxiv.org/abs/2003.08316v2", + "title": "From Statistical Relational to Neuro-Symbolic Artificial Intelligence", + "abstract": "Neuro-symbolic and statistical relational artificial intelligence both\nintegrate frameworks for learning with logical reasoning. This survey\nidentifies several parallels across seven different dimensions between these\ntwo fields. These cannot only be used to characterize and position\nneuro-symbolic artificial intelligence approaches but also to identify a number\nof directions for further research.", + "authors": "Luc De Raedt, Sebastijan Duman\u010di\u0107, Robin Manhaeve, Giuseppe Marra", + "published": "2020-03-18", + "updated": "2020-03-24", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction The integration of learning and reasoning is one of the key challenges in arti\ufb01cial intelligence and machine learning today, and various communities have been addressing it. That is especially true for the \ufb01eld of neuro-symbolic computation (NeSy) [Besold et al., 2017; Garcez et al., 2019], where the goal is to incorporate symbolic reasoning into neural networks. 
NeSy already has a long tradition, and it has recently attracted a lot of attention from various communities (cf. e.g., the keynotes of Yoshua Bengio and Henry Kautz on this topic at AAAI 2020). Many approaches to NeSy aim at extending neural networks with logical reasoning. Another domain that has a rich tradition in integrating learning and reasoning is that of statistical relational learning and arti\ufb01cial intelligence (StarAI) [Getoor and Taskar, 2007; De Raedt et al., 2016]. But rather than focusing on how to integrate logic and neural networks, it is centred around the question of how to integrate logic with probabilistic graphical models. Despite the common interest in combining logic or symbolic reasoning with a basic paradigm for learning, i.e., probabilistic graphical models or neural networks, it is surprising that there are not more interactions between these two \ufb01elds. This discrepancy is the key motivation behind this survey: it aims at pointing out the similarities between these two endeavours and in this way stimulate more crossfertilization. In doing so, we start from the literature on StarAI because, arguably, there is more consensus on what the key concepts, challenges and issues are in StarAI than in NeSy (cf. the number of tutorials and textbooks on related topics such as [Russell, 2015; De Raedt et al., 2016] but see also [Besold et al., 2017; Garcez et al., 2019]). It turns out that essentially the same issues and techniques that arise in StarAI have to be addressed in NeSy as well. The key contribution of this survey is that we identify a set of seven dimensions that these \ufb01elds have in common and that can be used to categorize both StarAI and NeSy approaches. These seven dimensions are concerned with (1) directed vs undirected models, (2) grounding vs proof based inference, (3) integrating logic with probability and/or neural computation, (4) logical semantics, (5) learning parameters or structure, (6) representing entities as symbols or sub-symbols, and, (7) the type of logic used. We provide evidence for our claim by positioning a wide variety of StarAI and NeSy systems along these dimensions and pointing out analogies between them. This, in turn, allows us to identify interesting opportunities for further research, by looking at areas across the dimensions that have not seen much work yet. Of course, there are also important differences between StarAI and NeSy, the most important one being that the former operates more at the symbolic level, lending itself naturally to explainable AI, while the latter operates more at the sub-symbolic level, lending itself more naturally for computer vision and natural language processing. Unlike some other recent surveys or perspectives on neuro-symbolic computation [Besold et al., 2017; Garcez et al., 2019], the present survey limits itself to a logical and probabilistic perspective, which it inherits from StarAI, and to developments in neuro-symbolic computation that are consistent with this perspective. Furthermore, it focuses on representative and prototypical systems rather than aiming at completeness (which would not be possible given the page limitations). At the same time, unlike many early approaches to neuro-symbolic computation (see [Bader and Hitzler, 2005] for an overview), which focused more on modeling issues and principles, we focus on approaches that are also used for learning. The following sections of the paper each describe a dimension. 
We summarize various neuro-symbolic approaches along these dimensions in Table 1. Furthermore, for ease of writing, the table mentions for each system the key reference (so that we do not always have to repeat these references). \f2 Directed vs undirected Within the graphical model community there is a distinction between the directed and undirected graphical models [Koller and Friedman, 2009], which has led to two distinct types of StarAI systems. The \ufb01rst generalizes directed models, and resembles Bayesian networks; the second generalizes undirected models like Markov networks or random \ufb01elds. The key difference between the two is that the \ufb01rst class of models indicates a natural direction (sometimes the term \u201ccausal\u201d is used) between the different random variables, while the second one does not. In StarAI, the \ufb01rst category includes wellknown representations such as plate notation [Koller and Friedman, 2009], probabilistic relational models (PRMs) [Friedman et al., 1999], probabilistic logic programs (PLPs) [De Raedt and Kimmig, 2015], and Bayesian logic programs (BLPs) [Kersting and De Raedt, 2007]. Today the most typical and popular representatives of this category are the probabilistic (logic) programs. The second category includes Markov Logic Networks (MLNs) [Richardson and Domingos, 2006] and Probabilistic Soft Logic (PSL) [Bach et al., 2017]. They essentially specify a set of weighted constraints, clauses or formulae. From a logical perspective, the difference amounts to using a form of de\ufb01nite clauses (as in the programming language Prolog) versus the use of full clausal logic or even \ufb01rst order logic. On the one side, a de\ufb01nite clause is an expression of the form h \u2190b1 \u2227... \u2227bn where h and the bi are logical atoms of the form p(t1,..., tm), with p being a predicate of arity m and the ti being terms, that is, constants, variables, or structured terms of the form f (t1,..., tn), where f is a functor and the ti are again terms. On the other side, full clausal logic also allows for formulae of the form h1 \u2228... \u2228hm \u2190b1,..., bn. In the \ufb01rst type of rule the direction of the implication indicates, just like the direction of the arrows in a Bayesian network, what can be inferred from what. In the second type of rule, this relationship is blurred because of the disjunction in the head of the rule, which allows multiple conclusions for the same premises. This explains why the \ufb01rst type of rule is more directly used for inference, while the second more as a constraint. It also re\ufb02ects the kind of knowledge that the user has about the problem. With directed models, one can express that a set of variables has a direct \u201ccausal\u201d in\ufb02uence on another one, while with undirected ones one expresses a kind of (soft) constraints on a set of variables, that is, that the variables are related to one another. Borrowing this view from StarAI, we can devise a \ufb01rst dimension for neuro-symbolic approaches, which relies entirely on the logical perspective outlined above. The \ufb01rst category includes NeSy systems based on Prolog or Datalog, such as Neural Theorem Provers (NTPs) [Rockt\u00e4schel and Riedel, 2017], NLProlog [Weber et al., 2019], DeepProbLog [Manhaeve et al., 2018] and DiffLog [Si et al., 2019]. These systems retain the directed nature of logical inference as they exploit backward chaining. 
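To illustrate the directed reading of definite clauses, here is a tiny, self-contained Python sketch of backward chaining, the proof-directed inference style that the Prolog- and Datalog-based systems above inherit. It is purely illustrative, propositional (no variables or unification), and uses a classic alarm-style example rather than any particular system's syntax:

```python
# Each rule is (head, [body atoms]); facts are rules with an empty body.
rules = [
    ("alarm", ["earthquake"]),
    ("alarm", ["burglary"]),
    ("calls_mary", ["alarm", "hears_alarm_mary"]),
    ("burglary", []),
    ("hears_alarm_mary", []),
]

def prove(goal, rules):
    # Backward chaining: a goal holds if some rule with that head has a
    # provable body; facts (empty bodies) close the recursion.
    # (Real systems also handle variables via unification and avoid cycles.)
    return any(all(prove(b, rules) for b in body)
               for head, body in rules if head == goal)

print(prove("calls_mary", rules))   # True: proved via burglary -> alarm
print(prove("earthquake", rules))   # False: no rule or fact derives it
```

The directed arrow of each clause is what makes this goal-driven search well defined; an undirected weighted constraint has no such preferred direction and is instead evaluated over a grounding, as discussed next.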
Lifted Relational Neural Networks (LRNNs) [\u0160ourek et al., 2018] and \u2202ILP [Evans and Grefenstette, 2018] are other examples of non-probabilistic directed models, where de\ufb01nite clauses are compiled into a neural network architecture in a forward chaining fashion. The systems that imitate logical reasoning with tensor calculus, Neural Logic Programming (NeuralLP) [Yang et al., 2017] and Neural Logic Machines (NLM) [Dong et al., 2019], are likewise instances of directed logic. The undirected NeSy approaches consider logic as a constraint on the behaviour of a predictive model. A large group of approaches, including Semantic Based regularization (SBR) [Diligenti et al., 2017] and Semantic Loss (SL) [Xu et al., 2018], exploits logical knowledge as a soft constraint over the hypothesis space in a way that favours solutions consistent with the encoded knowledge. SBR implements predicates as neural networks and translates the provided logical formulas into a real valued regularization by means of fuzzy logic, while SL uses marginal probabilities of the target atoms to de\ufb01ne the regularization term and relies on arithmetic circuits [Darwiche, 2011] to evaluate it ef\ufb01ciently. Another group of approaches, including Logic Tensor Networks (LTN) [Donadello et al., 2017], Neural Markov Logic Networks (NMLN) [Marra and Ku\u017eelka, 2019] and Relational Neural Machines (RNM) [Marra et al., 2020] extend MLNs, allowing either predicates (LTN) or factors (NMLN and RNM) to be implemented as neural architectures. Finally, [Rockt\u00e4schel et al., 2015; Demeester et al., 2016] compute ground atoms scores as dot products between relation and entities embeddings; implication rules are then translated into a logical loss by means of continuous relaxation of the implication operator. 3 Grounding vs proofs From a logical perspective there is a model-theoretic and a proof-theoretic perspective to inference. This is clear when looking at the difference between Answer Set Programming and the programming language Prolog. In the model theoretic perspective, one \ufb01rst grounds out the clauses in the theory and then calls a SAT solver (possibly after breaking cycles), while in a proof theoretic perspective, one performs a sequence of inference steps in order to obtain a proof. Grounding is the step whereby a clause c (or formula) containing variables {V1,..., Vk} is replaced by all instances c\u03b8 where \u03b8 is a substitution {V1 = c1,...Vk = ck} and the ci are constants (or other ground terms) appearing in the domain. The resulting clause c\u03b8 is that obtained by simultaneously replacing all variables by the corresponding constants. Usually the grounding process is optimised in order to obtain only those ground clauses that are relevant for the considered inference task. These two perspectives carry over to the StarAI perspective. Many StarAI systems use the logic as a kind of template to ground out the relational model in order to obtain a grounded model and perform inference. This grounded \fmodel can be a graphical model, or alternatively, it can be a ground weighted logical theory on which traditional inference methods apply, such as belief propagation or weighted model counting. This is used in well known systems such as MLNs, PSL, BLPs, and PRMs. Some systems like PRMs and BLPs also use aggregates or combining rules in their knowledge base construction approach. The idea then is to combine multiple conditional probability distributions into one using, e.g., noisy-or. 
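As a minimal illustration of the grounding step defined above (not tied to any particular system), the following Python snippet enumerates all ground instances of a relational clause by substituting constants for its variables, using the classic smokers example for concreteness:

```python
from itertools import product

def ground_clause(clause, variables, constants):
    # Replace all variables simultaneously by every combination of constants,
    # yielding one ground clause per substitution theta.
    for values in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, values))
        yield [(pred, tuple(theta.get(a, a) for a in args)) for pred, args in clause]

# smokes(X) :- friends(X, Y), smokes(Y), written as a list of atoms with the
# head first; X and Y are variables, anna and bob are the domain constants.
clause = [("smokes", ("X",)), ("friends", ("X", "Y")), ("smokes", ("Y",))]
for ground in ground_clause(clause, ["X", "Y"], ["anna", "bob"]):
    print(ground)   # four ground clauses, one per substitution
```

Grounding-based systems build their graphical model or neural architecture over exactly such ground instances, which is why optimised, query-relevant grounding matters for scalability.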
Alternatively, one can follow a proof or trace based approach to de\ufb01ne the probability distribution and perform inference. This is akin to what happens in probabilistic programming (cf. also [Russell, 2015]), in StarAI frameworks such as PLPs, probabilistic databases [Van den Broeck et al., 2017] and probabilistic uni\ufb01cation based grammars such as Stochastic Logic Programs (SLPs) [Muggleton, 1996]. The idea is that a proof will form the basis for probabilistic inference. Just like pure logic supports the model-theoretic and proof-theoretic, both perspectives have been explored in parallel for some of the probabilistic logic programming languages such as ICL [Poole, 2008] and ProbLog [Fierens et al., 2015]. Again this carries over to neuro-symbolic methods. Approaches of NTPs, DeepProblog, \u2202ILP and DiffLog are proof-based. The probabilities or certainties that these systems output are based on the enumerated proofs, and they are also able to learn how to combine them. In contrast, approaches of LRNN, LTNs, RNM, NMLN, NLM and NeuralLP are all based on grounding. Learning in these models is done through learning the (shared) parameters over the ground model and inference is based on possible groundings of the model. 4 Logic vs Probability vs Neural When two paradigms are integrated, examining which of the base paradigms are preserved, and to which extent, tells us a lot about the strengths and weaknesses of the resulting paradigm. In StarAI, the traditional knowledge based model construction approach is to use the logic only to generate a probabilistic graphical model, implying that both the inference and the semantics are pushed inside the graphical model. The effect is that it is often harder to reason at a purely logical level with such systems. What is meant here is that it may become unclear how to apply logical inference rules such as resolution (or extensions that take into account the parameters) to such models or what the effect of applying such rules will be. This is what happens with systems such as PRMs, BLPs, PSL , and MLNs. For instance, in MLNs the addition of the resolvent of two weighted rules, makes it hard to predict the effect on the distribution. On the other hand, the opposite holds for PLPs and its variants. While it is clear what the effect of a logical operation is, it is often harder to directly identify and exploit properties such as conditional or contextual independencies, which are needed for ef\ufb01cient probabilistic inference. This position on the spectrum between logic and probability has a profound in\ufb02uence on the properties of the underlying model. For NeSy, the spectrum involves not only logic and neural networks, but often also probability. It has been argued that when combining different perspectives in one model or framework, such as neural, logic and probabilistic ones, it is desirable to have the originals or base paradigms as a special case, see also [De Raedt et al., 2019]. The vast majority of current NeSy approaches focus on the neural aspect (i.e., they originated as a fully neural method to which logical components have been added). Some of these approaches like LTNs and TensorLog [Cohen et al., 2017] pursue a kind of knowledge-based model construction approach in which the logic is compiled away into the neural network architecture. 
A different family of NeSy approaches, which includes SL and SBR, turns the logic into a regularization function to provide a penalty whenever the desired logical theory or constraints are violated. This leads to the logic being compiled into the weights of the trained neural network. A small number of NeSy methods, however, retain the focus on logic. Some of these methods start from existing logic (programming) frameworks and extend them with primitives that allow them to interface with neural networks and allow for differentiable operations. Examples include DeepProbLog and DiffLog. Other methods instead take an existing framework and turn it into a differentiable version. The key inference concepts are mapped onto an analogous concept that behaves identically for the edge cases, but is continuous and differentiable in non-deterministic cases. Such methods include \u2202ILP , \u22024 [Bo\u0161njak et al., 2017] and NTPs. Even for methods that focus on logic, it can be useful to map the problem onto an intermediate representation. One such idea concerns performing probabilistic inference by mapping it onto a weighted model counting (WMC) problem. This can then in turn be solved by compiling it into a structure (e.g. an arithmetic circuit) that allows for ef\ufb01cient inference. This has the added bene\ufb01t that this structure is differentiable, which can facilitate the integration between logic based systems and neural networks. DeepProbLog, for example, uses this approach. In [Zuidberg Dos Martires et al., 2019], the authors argue that this intermediate representation can serve as an assembly language for AI. 5 Semantics Traditionally, StarAI combines two semantics: a logical and a probabilistic one. In a logical semantics, atoms are assigned a truth value in the {true,false} set (i.e. {0,1}). In a probabilistic semantics, probability is de\ufb01ned as a measure over sets of possible worlds, where each possible world is an assignment of values to the random variables. This implies that a probabilistic logic semantics de\ufb01nes probability distributions over ground logical interpretations, that is, over sets of ground facts. Prominent examples in StarAI are ProbLog (from the directed side) and Markov Logic (from the undirected one). However, the complexity of inference in probabilistic logic has led to statistical relational \fapproaches (e.g. [Bach et al., 2017]), where the truth values are relaxed in the continuous interval [0,1] and logic operators are turned into real valued functions. This setting is described in terms of fuzzy logic (or soft logic) semantics, mathematically grounded in the t-norm theory. By exploiting the translation of Boolean formulas into real valued functions, the fuzzy semantics allows to exploit algebraic and geometric properties of t-norms (including especially their differentiability) to reduce complexity. The main issue of fuzzy semantics in the context of StarAI is that it is often not exploited to describe problems that are intrinsically vague [Fine, 1975], but, simplistically, as a continuous surrogate of Boolean logic. A side effect of this approximation is that many properties of the original logical theory can be realised in many different ways in their continuous translation. Indeed, the fuzzi\ufb01cation procedure alters the logical properties of the original theory (such as satis\ufb01ability), depending on the particular connectives exploited in the conversion. 
For example, in the \u0141ukasiewicz t-norm t\u0141(x, y) = max{0, x + y \u22121}, the conjunction can be 0 (i.e. false) even without any of the elements being 0 (e.g. x = y = 0.5). Neuro-symbolic approaches can easily be categorized in terms of the same logical, probabilistic or fuzzy semantics. Neural enhancements of the logic semantics either use neural networks to turn perceptive input to a logical atom or introduce a relaxed version of logical reasoning performed through tensor calculus. An instance of the former is ABL [Dai et al., 2019], which use logical abduction to provide the feedback for a neural model processing the perceptive input. Tensor calculus approaches, such as NLM and NeuralLP , interpret predicates as tensors grounded over all constants in a domain and interpret clauses as a product of those matrices. Neural enhancements of the probabilistic semantics usually reparameterize the underlying distribution in terms of neural components. In particular, DeepProbLog exploits neural predicates to compute the probabilities of probabilistic facts as the output of neural computations over vectorial representations of the constants, which is similar to SL in the propositional counterpart. NMLN and RNM use neural potentials in order to implement factors (or their weights) as neural networks. [Rockt\u00e4schel et al., 2015] computes marginal probabilities as logistic functions over similarity measures between embeddings of entities and relations. Neural enhancements of the fuzzy semantics are usually realised by allowing continuous truth values to be the outcome of a neural process and the differentiability of the corresponding t-norm allows for an easy integration with neural computation frameworks. In particular, SBR and LTN turn atoms into neural networks taking as inputs the feature representation of the constants and returning the corresponding truth value. Similarly, in LRNN and [Wang and Pan, 2019], the output of the neurons of the logical network can be interpreted as fuzzy truth values of the corresponding atoms. Finally, there is a large class of methods [Minervini et al., 2017; Demeester et al., 2016; Cohen et al., 2017; Weber et al., 2019] realised by relaxing logical statements in a numeric way, without giving any other speci\ufb01c semantics, either probabilistic or fuzzy. Here, atoms are assigned scores in R computed by a neural scoring function over embeddings. Numerical approximations are then applied either to combine these scores according to logical formulas or to aggregate proofs scores. The resulting neural architecture is usually differentiable and, thus, trained end-to-end. 6 Learning parameters or structure StarAI distinguishes between two types of learning: structure learning, which corresponds to learning the logical clauses of the model [Kok and Domingos, 2005], and parameter learning in which the probabilities or weights of the clauses have to be estimated [Gutmann et al., 2008; Lowd and Domingos, 2007]. This distinction is less clear in the NeSy setting. Unlike what is common in StarAI, the NeSy approaches do not perform a search through the discrete space of possible clauses, but rather through the space of parameters of such clauses which are typically enumerated by following a template (often with a prede\ufb01ned complexity). Examples of such systems include NTPs, \u2202ILP , DeepProbLog, NeuralLP and DiffLog. 
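To make the template-based view of structure learning concrete, here is a small, hypothetical sketch: candidate rules are enumerated from a chain template over a fixed set of predicates and each receives a learnable weight, so that "structure learning" reduces to fitting continuous parameters. The predicate names are invented for illustration and the actual weight training is deliberately left out:

```python
from itertools import product

predicates = ["parent", "spouse", "sibling"]   # illustrative relations
target = "related"

# Template: target(X, Y) :- p(X, Z), q(Z, Y) for every pair of body predicates.
candidate_rules = [
    {"head": f"{target}(X,Y)", "body": [f"{p}(X,Z)", f"{q}(Z,Y)"], "weight": 0.0}
    for p, q in product(predicates, repeat=2)
]

print(len(candidate_rules))   # 9 candidate chain rules
print(candidate_rules[0])
# A NeSy learner in the spirit of NeuralLP or dILP would now adjust the weights
# by gradient descent so that highly weighted rules explain the training data,
# instead of searching the discrete space of clauses directly.
```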
Alternatively, one can provide a sketch of the desired program \u2013 a program with certain decisions left blank \u2013 and learn a NeSy model to \ufb01ll out the blanks, such as DeepProbLog and \u22024. A substantial number of approaches tries to leverage the best of both worlds. These ideas include using neural models to guide the symbolic search [Kalyan et al., 2018; Ellis et al., 2018a; Valkov et al., 2018], or using a neural model to produce a program that is then executed symbolically [Ellis et al., 2018b; Mao et al., 2019]. 7 Symbols vs Sub-symbols An important factor in both StarAI and NeSy systems is the representation of entities. StarAI generally represents entities by constants (symbols). But neural methods are numerical by nature and therefore symbols are replaced with sub-symbols, i.e., vectorized representations. If the entity has inherent numerical properties, these could be used as sub-symbols (e.g. the pixel data of an image). However, if this is not the case, a one-hot encoding or learned embedding can be used instead. This, of course, has an impact on the generalizability of the system towards unseen entities, as new embeddings have to be learned for new symbols. Naturally, among the neurosymbolic methods, there is a wide variety in how symbols and sub-symbols are used in representation and reasoning. The idea of mapping entities onto sub-symbols is made very explicit in LTNs, where in a \ufb01rst step, all symbols are replaced with sub-symbols. In DeepProbLog, entities are represented using symbols, but they sometimes have sub-symbolic representations that are only used inside the neural networks. Similarly, in [Lippi and Frasconi, 2009] and RNM, MLNs are conditioned on a feature representation of constants (e.g. images, audio signals, etc.). Finally, among those models exploiting learned embeddings, \fwe \ufb01nd [Rockt\u00e4schel et al., 2015; Minervini et al., 2017; Demeester et al., 2016]. Now that we discussed how entities can be represented by symbols and sub-symbols, let us discuss how they can be used for reasoning. Most methods either only work with logic reasoning on symbols, or perform algebraic operations on sub-symbols. However, some methods can use both simultaneously. A very powerful and elegant mechanism for reasoning about symbols in \ufb01rst order logic is uni\ufb01cation. It is used to reason about equality at the symbolic level. For instance, the atomic expressions p(a, Y ) and p(X, b) can be uni\ufb01ed using the substitution {X = a, Y = b}. Uni\ufb01cation not only works for constants but also for structured terms f (t1,..., tn) where f is a structured term and the ti are constants, variables or structured terms themselves. While uni\ufb01cation is not supported by standard neural networks, reasoning about equality corresponds closely to reasoning about similarity in embedding space. Entities are typically embedded in some metric space, and represented through their embeddings, that is, through sub-symbols. Reasoning typically proceeds by performing algebraic operations (such as vector addition) on these embeddings, and considering the similarity between two entities by using their distance in embedding space. It is quite interesting to see to what extent current neuro-symbolic approaches support uni\ufb01cation on the one hand, and to what extent the use of embeddings has been integrated into the neurosymbolic logics as a kind of soft equality or uni\ufb01cation This idea was implemented in NTPs and NLProlog as soft or weak uni\ufb01cation. 
In these systems, two entities can be uni\ufb01ed if they are similar, and not just if they are identical. As such, this system can interweave both symbols and subsymbols during inference. For each entity, an embedding is learned and their similarity is determined based on the distance between the embeddings using a radial basis function. However, this potentially adds a lot of different proof paths, which can result in computational issues for larger programs. This problem was solved in later iterations of the system [Minervini et al., 2020]. 8 Type of logic There is a natural ordering of logical representations, starting with propositional logic (only arity 0 predicates), to relational logic (having no structured terms, so only constants and variables as terms, which is also the basis for the Datalog database language), to general \ufb01rst order logic (FOL), and then to logic programs (LP) as in the programming language Prolog. Logic programs are usually restricted to de\ufb01nite clauses, while the semantics of a de\ufb01nite clause program is given by its least Herbrand model, the set of all ground facts that are logically entailed by the program. This contrasts with the standard semantics of \ufb01rst order logic that would also allow for other models. This difference carries over to StarAI, where probabilistic logic programs and Markov Logic inherit their semantics from logic programming, respectively \ufb01rst order logic. This explains, for instance, why Markov Logic\u2019s semantics boils down to a maximum entropy approach when a theory has multiple models (such as a \u2228b), cf. [De Raedt and Kimmig, 2015; De Raedt et al., 2016] for more details. On the other hand, logic programs are also the basis for the programming language Prolog, which implies that they can be used to specify traditional programs such as sorting and data structures such as lists through structured terms. This is relevant especially for those approaches to neurosymbolic computation that are used to synthesize programs from examples. Neuro-symbolic representations typically extend one of these four types of logic: propositional, relational, \ufb01rst order logic, or logic programs. For instance, SL focuses only on the propositional setting. On the other hand, \u2202ILP , NTPs and DiffLog are based on Datalog, which belongs to relational logic segment. LTNs and SBR use fuzzy logic to translate a general FOL theory into a training objective, either isolated or in conjunction with a supervised criterion. Just like Markov Logic, also RNM and NMLN use \ufb01rst order logic to generate a random \ufb01eld. Finally, DeepProbLog, NLProlog and LRNN are examples of neuro-symbolic logic programming frameworks. 9 Open challenges To conclude, we now list a number of challenges for NeSy, which deserve, in our opinion, more attention. Probabilistic reasoning Although relatively few methods explore the integration of logical and neural methods through probabilities perspective, we believe that a probabilistic approach is the best way to principally integrate the two [De Raedt et al., 2019]. There should be further investigation into the applicability of probabilistic reasoning for neuro-symbolic computation. Structure learning While signi\ufb01cant progress has been made on learning the structure of purely relational models (without probabilities), learning StarAI models remains a major challenge due to the complexity of inference and the combinatorial nature of the problem. 
Incorporating neural aspects complicates the problem even more. NeSy methods have certainly shown potential for addressing this problem (Section 6), but the existing methods are still limited and mostly domain-speci\ufb01c which impedes their wide application. Scaling inference Scalable inference is a major challenge for StarAI and therefore also for NeSy approaches with an explicit logical or probabilistic reasoning component. Investigating to which extent neural methods can help with this challenge by means of lifted (exploiting symmetries in models) or approximate inference, as well as reasoning from the intermediate representations [Abboud et al., 2020], are promising future research directions. Data ef\ufb01ciency A major advantage of StarAI methods, as compared to neural ones, is their data ef\ufb01ciency \u2013 StarAI methods can ef\ufb01ciently learn from small amount of data, whereas neural methods are data hungry. On the other hand, StarAI methods do not scale to big data sets, while neural methods can easily handle them. We believe that \fDimension 1 Dimension 2 Dimension 3 Dimension 4 Dimension 5 Dimension 6 Dimension 7 (D)irected (U)ndirected (G)rounding (P)roofs (L)ogic (P)robability (N)eural (L)ogic (P)robability (F)uzzy (P)arameter (S)tructure (S)ymbols (Sub)symbols (P)ropositional (R)elational (FOL) (LP) \u2202ILP [Evans and Grefenstette, 2018] D P L+N L P S R DeepProbLog [Manhaeve et al., 2018] D P L+P+N P P S+Sub LP DiffLog [Si et al., 2019] D P L+N L P+S S R LRNN [\u0160ourek et al., 2018] D P L+N F P+S S+Sub LP LTN [Donadello et al., 2017] U G L+N F P Sub FOL NeuralLP [Yang et al., 2017] D G L+N L P S R NLM [Dong et al., 2019] D G L+N L P+S S R NLProlog [Weber et al., 2019] D P L+P+N P P+S S+Sub LP NMLN [Marra and Ku\u017eelka, 2019] U G L+P+N P P+S S+Sub FOL NTP [Rockt\u00e4schel and Riedel, 2017] D P L+N L P+S S+Sub R RNM [Marra et al., 2020] U G L+P+N P P S+Sub FOL SL [Xu et al., 2018] U G L+P+N P P S+Sub P SBR [Diligenti et al., 2017] U G L+N F P Sub FOL Tensorlog [Cohen et al., 2017] D P L+N P P S+Sub R Table 1: Taxonomy of a (non-exhaustive) list of NeSy models according to the 7 dimensions outlined in the paper. understanding how these methods can help each other to overcome their complementary weaknesses, is a promising research direction. Symbolic representation learning The effectiveness of deep learning comes from the ability to change the representation of the data so that the target task becomes easier to solve. The ability to change the representation on the symbolic level as well would signi\ufb01cantly increase the capabilities of NeSy systems. This is a major open challenge for which neurally inspired methods could help achieve progress [Cropper, 2019; Duman\u02c7 ci\u00b4 c et al., 2019]. Acknowledgements Robin Manhaeve and Sebastijan Dumancic are funded by the Research Foundation-Flanders (FWO). This work has also received funding from the European Research Council (ERC) under the European Union\u2019s Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models)." + }, + { + "url": "http://arxiv.org/abs/1312.4328v1", + "title": "Probabilistic Programming Concepts", + "abstract": "A multitude of different probabilistic programming languages exists today,\nall extending a traditional programming language with primitives to support\nmodeling of complex, structured probability distributions. 
Each of these\nlanguages employs its own probabilistic primitives, and comes with a particular\nsyntax, semantics and inference procedure. This makes it hard to understand the\nunderlying programming concepts and appreciate the differences between the\ndifferent languages. To obtain a better understanding of probabilistic\nprogramming, we identify a number of core programming concepts underlying the\nprimitives used by various probabilistic languages, discuss the execution\nmechanisms that they require and use these to position state-of-the-art\nprobabilistic languages and their implementation. While doing so, we focus on\nprobabilistic extensions of logic programming languages such as Prolog, which\nhave been developed since more than 20 years.", + "authors": "Luc De Raedt, Angelika Kimmig", + "published": "2013-12-16", + "updated": "2013-12-16", + "primary_cat": "cs.PL", + "cats": [ + "cs.PL" + ], + "main_content": "Introduction The vast interest in statistical relational learning [Getoor and Taskar, 2007], probabilistic (inductive) logic programming [De Raedt et al., 2008] and probabilistic programming languages [Roy et al., 2008] has resulted in a wide variety of di\ufb00erent formalisms, models and languages. The multitude of probabilistic languages that exists today provides evidence for the richness and maturity of the \ufb01eld, but on the other hand, makes it hard to get an appreciation and understanding of the relationships and di\ufb00erences between the di\ufb00erent languages. Furthermore, most arguments in the literature about the relationship amongst these languages are about the expressiveness of these languages, that is, they state (often in an informal way) that one language is more expressive than another one (implying that the former could be used to emulate the latter). By now, it is commonly accepted that the more interesting question is concerned with the underlying concepts that these languages employ and their e\ufb00ect on the inference mechanisms, as their expressive power is often very similar. However, a multitude of di\ufb00erent probabilistic primitives exists, which makes it hard to appreciate their relationships.1 1Throughout the paper we use the term primitive to denote a particular syntactic and semantic construct that is available in a particular probabilistic programming language, and 1 arXiv:1312.4328v1 [cs.PL] 16 Dec 2013 \fTo alleviate these di\ufb03culties and obtain a better understanding of the \ufb01eld we identify a number of core probabilistic programming concepts and relate them to one another. We cover the basic concepts representing di\ufb00erent types of random variables, but also general modeling concepts such as negation or time and dynamics, and programming constructs such as meta-calls and ways to handle sets. While doing so, we focus on probabilistic extensions of logic programming languages because this is (arguably) the \ufb01rst and best studied probabilistic programming paradigm. It has been studied for over 20 years starting with the seminal work of David Poole [1992] and Taisuke Sato [1995], and now includes languages such as CLP(BN) [Santos Costa et al., 2008], BLPs [Kersting and De Raedt, 2008], ICL [Poole, 2008], PRISM [Sato and Kameya, 2001], ProbLog [De Raedt et al., 2007], LPADs [Vennekens et al., 2004], CP-logic [Vennekens et al., 2009], SLPs [Muggleton, 1996], PROPPR [Wang et al., 2013], P-log [Baral et al., 2009] and Dyna [Eisner et al., 2005]. 
Another reason for focussing on probabilistic extensions of logic programming languages is that the concepts are all embedded within the same host language, so we can focus on semantics rather than syntax. At the same time, we also relate the concepts to alternative probabilistic programming languages such as Church [Goodman et al., 2008], IBAL [Pfe\ufb00er, 2001], Figaro [Pfe\ufb00er, 2009] and BLOG [Milch et al., 2005] and to some extent also to statistical relational learning models such as RBNs [Jaeger, 2008], Markov logic [Richardson and Domingos, 2006], and PRMs [Getoor et al., 2007]. Most statistical relational learning approaches employ a knowledge-based model construction approach, in which the logic is used as a template for constructing a graphical model. Typical probabilistic programming languages, on the other hand, employ a variant of Sato\u2019s distribution semantics [Sato, 1995], in which random variables directly correspond to ground facts and a traditional program speci\ufb01es how to deduce further knowledge from these facts. This di\ufb00erence explains why we introduce the concepts in the context of the distribution semantics, and discuss approaches to knowledge-based model construction separately. Inference is a key challenge in probabilistic programming and statistical relational learning. Furthermore, the choice of inference approach often in\ufb02uences which probabilistic primitives can be supported. Enormous progress has been made in the past few years w.r.t. probabilistic inference and numerous inference procedures have been contributed. Therefore, we also identify some core classes of inference mechanisms for probabilistic programming and discuss which ones to use for which probabilistic concept. Inference in probabilistic languages also is an important building block of approaches that learn the structure and/or parameters of such models from data. Given the variety of approaches that exist today, a discussion of learning is beyond the scope of this paper. To summarize, the key contributions of this paper are (1) the identi\ufb01cation of a number of core concepts that are used by various probabilistic languages, (2) a discussion of the execution mechanisms that they require, and (3) a positioning of state-of-the-art probabilistic languages and implementations w.r.t. these concepts. Although many of the concepts we discuss are well-described in the literature, some even in survey papers [De Raedt and Kersting, 2003, Poole, 2008], we believe a new and up-to-date survey is warranted due to the rapid the term concept to denote the underlying notion. Di\ufb00erent primitives may hence realize the same concept. 2 \fdevelopments of the \ufb01eld which rapidly renders existing surveys incomplete and even outdated. To the best of our knowledge, this is also the \ufb01rst time that such a wide variety of probabilistic programming concepts and languages, also in connection to inference, is discussed in a single paper. We expect the reader to be familiar with basic language concepts and terms of Prolog [Lloyd, 1989, Flach, 1994]; a quick summary can be found in Appendix A. This paper is organized as follows. We \ufb01rst discuss the distribution semantics (Section 2) and classify corresponding inference approaches according to their logical and probabilistic components (Section 3). Section 4 identi\ufb01es the probabilistic programming concepts. In Section 5, we discuss the relation with statistical relational modeling approaches rooted in graphical models. 
Section 6 relates the di\ufb00erent inference approaches to the probabilistic programming concepts. 2 Distribution Semantics Sato\u2019s distribution semantics [Sato, 1995] is a well-known semantics for probabilistic logics that has been used many times in the literature, cf. [Dantsin, 1991, Poole, 1993, Fuhr, 2000, Poole, 2000, Sato and Kameya, 2001, Dalvi and Suciu, 2004, De Raedt et al., 2007]. Prominent examples of Prolog-based languages using this semantics include ICL [Poole, 2008], PRISM [Sato and Kameya, 2001] and ProbLog [De Raedt et al., 2007, Kimmig et al., 2011a], even though there exist subtle di\ufb00erences between these languages as we will illustrate later. Sato has de\ufb01ned the distribution semantics for a countably in\ufb01nite set of random variables and a general class of distributions. We focus on the \ufb01nite case here, discussing the two most popular instances of the semantics, based on a set of independent random variables and independent probabilistic choices, respectively, and refer to [Sato, 1995] for details on the general case. 2.1 Probabilistic Facts The arguably most basic instance of the distribution semantics uses a \ufb01nite set of Boolean random variables that are all pairwise independent. Throughout the paper, we use the following running example inspired by the well-known alarm Bayesian network: 0.1 :: burglary. 0.7 :: hears alarm(mary). 0.2 :: earthquake. 0.4 :: hears alarm(john). alarm :\u2212earthquake. (1) alarm :\u2212burglary. calls(X) :\u2212alarm, hears alarm(X). call :\u2212calls(X). The program consists of a set R of de\ufb01nite clauses or rules, and a set F of ground facts f, each of them labeled with a probability p, written as p :: f. We call such labeled facts probabilistic facts. Each probabilistic fact corresponds to 3 \fa Boolean random variable that is true with probability p and false with probability 1\u2212p. We use b, e, hm and hj to denote the random variables corresponding to burglary, earthquake, hears_alarm(mary) and hears_alarm(john), respectively. Assuming that all these random variables are independent, we obtain the following probability distribution PF over truth value assignments to these random variables and their corresponding sets of ground facts F \u2032 \u2286F: PF (F \u2032) = Y fi\u2208F \u2032 pi \u00b7 Y fi\u2208F \\F \u2032 (1 \u2212pi) (2) For instance, the truth value assignment burglary = true, earthquake = false, hears alarm(mary) = true, hears alarm(john) = false, which we will abbreviate as b \u2227\u00ace \u2227hm \u2227\u00achj, corresponds to the set of facts {burglary, hears_alarm(mary)}, and has probability 0.1\u00b7(1\u22120.2)\u00b70.7\u00b7(1\u22120.6) = 0.0336. The corresponding logic program obtained by adding the set of rules R to the set of facts, also called a possible world, is burglary. hears alarm(mary). alarm :\u2212earthquake. (3) alarm :\u2212burglary. calls(X) :\u2212alarm, hears alarm(X). call :\u2212calls(X). As each logic program obtained by \ufb01xing the truth values of all probabilistic facts has a unique least Herbrand model, PF can be used to de\ufb01ne the success probability of a query q, that is, the probability that q is true in a randomly chosen such program, as the sum over all programs that entail q: Ps(q) := X F \u2032\u2286F \u2203\u03b8F \u2032\u222aR| =q\u03b8 PF (F \u2032) (4) = X F \u2032\u2286F \u2203\u03b8F \u2032\u222aR| =q\u03b8 Y fi\u2208F \u2032 pi \u00b7 Y fi\u2208F \\F \u2032 (1 \u2212pi) . 
(5) Naively, the success probability can thus be computed by enumerating all sets F \u2032 \u2286F, for each of them checking whether the corresponding possible world entails the query, and summing the probabilities of those that do. As \ufb01xing the set of facts yields an ordinary logic program, the entailment check can use any reasoning technique for such programs. For instance, forward reasoning, also known as applying the TP operator, starts from the set of facts and repeatedly uses rules to derive additional facts until no more facts can be derived. In our example possible world (3), we thus start from {burglary, hears_alarm(mary)}, and \ufb01rst add alarm due to the second rule based on burglary. This in turn makes it possible to add calls(mary) using the third rule and substitution X=mary, and \ufb01nally, call is added using the last rule, resulting in the least Herbrand model {burglary, hears_alarm(mary), alarm, calls(mary), call}. This possible world thus contributes to the success probabilities of alarm, calls(mary) and call, but not to the one of calls(john). 4 \fworld calls(john) probability b \u2227\u00ace \u2227hm \u2227\u00achj false 0.1 \u00b7 (1 \u22120.2) \u00b7 0.7 \u00b7 (1 \u22120.4) = 0.0336 b \u2227\u00ace \u2227hm \u2227hj true 0.1 \u00b7 (1 \u22120.2) \u00b7 0.7 \u00b7 0.4 = 0.0224 b \u2227e \u2227hm \u2227\u00achj false 0.1 \u00b7 0.2 \u00b7 0.7 \u00b7 (1 \u22120.4) = 0.0084 b \u2227e \u2227hm \u2227hj true 0.1 \u00b7 0.2 \u00b7 0.7 \u00b7 0.4 = 0.0056 \u00acb \u2227e \u2227hm \u2227\u00achj false (1 \u22120.1) \u00b7 0.2 \u00b7 0.7 \u00b7 (1 \u22120.4) = 0.0756 \u00acb \u2227e \u2227hm \u2227hj true (1 \u22120.1) \u00b7 0.2 \u00b7 0.7 \u00b7 0.4 = 0.0504 Table 1: The possible worlds of program (1) where calls(mary) is true. An alternative to forward reasoning is backward reasoning, also known as SLD-resolution or proving, which we again illustrate for our example possible world (3). It starts from a given query, e.g., call, and uses the rules in the opposite direction: in order to prove a fact appearing in the head of a clause, we have to prove all literals in the clause\u2019s body. For instance, based on the last rule, to prove call, we need to prove calls(X) for some instantiation of X. Using the third rule, this means proving alarm, hears_alarm(X). To prove alarm, we could use the \ufb01rst rule and prove earthquake, but this fails for our choice of facts, as there is no rule (or fact) for the latter. We thus backtrack to the second rule for alarm, which requires proving burglary, which is proven by the corresponding fact. Finally, we prove hears_alarm(X) using the fact hears_alarm(mary), substituting mary for X, which completes the proof for call. Going over all possible worlds in this way, we obtain the success probability of calls(mary), Ps(calls(mary)) = 0.196, as the sum of the probabilities of six possible worlds (listed in Table 1). Clearly, enumerating all possible worlds is infeasible for larger programs; we will discuss alternative inference techniques from the literature in Section 3. For ease of modeling (and to allow for countably in\ufb01nite sets of probabilistic facts), probabilistic languages such as ICL and ProbLog use non-ground probabilistic facts to de\ufb01ne sets of random variables. All ground instances of such a fact are mutually independent and share the same probability value. As an example, consider a simple coin game which can be won either by throwing two times heads or by cheating. 
This game can be modeled by the program below. The probability to win the game is then de\ufb01ned by the success probability Ps(win). 0.5 :: heads(X). 0.2 :: cheat successfully. win :\u2212cheat successfully. win :\u2212heads(1), heads(2). Legal groundings of such facts can also be restricted by providing a domain, as in the following variant of our alarm example where all persons have the same 5 \fprobability of independently hearing the alarm: 0.1 :: burglary. 0.2 :: earthquake 0.7 :: hears alarm(X) :\u2212person(X). person(mary). person(john). person(bob). person(ann). alarm :\u2212earthquake. alarm :\u2212burglary. calls(X) :\u2212alarm, hears alarm(X). call :\u2212calls(X). If such domains are de\ufb01ned purely logically, without using probabilistic facts, the basic distribution is still well de\ufb01ned. It is often assumed that probabilistic facts do not unify with other probabilistic facts or heads of rules. 2.2 Probabilistic Choices As already noted by Sato [1995], probabilistic facts (or binary switches) are expressive enough to represent a wide range of models, including Bayesian networks, Markov chains and hidden Markov models. However, for ease of modeling, it is often more convenient to use multi-valued random variables instead of binary ones. The concept commonly used to realize such variables in the distribution semantics is a probabilistic choice, that is, a \ufb01nite set of ground atoms exactly one of which is true in any possible world. Examples of such choices are the probabilistic alternatives of the Independent Choice Logic (ICL) [Poole, 2000] and probabilistic Horn abduction (PHA) [Poole, 1993], the multi-ary random switches of PRISM [Sato and Kameya, 2001], the probabilistic clauses of stochastic logic programs (SLPs) [Muggleton, 1996], and the annotated disjunctions of logic programs with annotated disjunctions (LPADs) [Vennekens et al., 2004], or the CP-events of CP-logic [Vennekens, 2007]. These are all closely related, e.g., the probabilistic clauses of SLPs map onto the switches of PRISM [Cussens, 2005], and the probabilistic alternatives of ICL onto annotated disjunctions (and vice versa) [Vennekens et al., 2004]. We therefore restrict the following discussion to annotated disjunctions [Vennekens et al., 2004], using the notation introduced below. An annotated disjunction (AD) is an expression of the form p1 :: h1; . . . ; pN :: hN :\u2212b1, . . . , bM. where b1, . . . , bM is a possibly empty conjunction of literals, the pi are probabilities and PN i=1 pi \u22641. Considered in isolation, an annotated disjunction states that if the body b1, . . . , bM is true at most one of the hi is true as well, where the choice is governed by the probabilities (see below for interactions between multiple ADs with unifying atoms in the head). If the pi in an annotated disjunction do not sum to 1, there is also the case that nothing is chosen. The probability of this event is 1 \u2212Pn i=1 pi. A probabilistic fact is thus a special case of an AD with a single head atom and empty body. 6 \fFor instance, consider the following program: 0.4 :: draw. 1 3 :: color(green); 1 3 :: color(red); 1 3 :: color(blue) :\u2212draw. The probabilistic fact states that we draw a ball from an urn with probability 0.4, and the annotated disjunction states that if we draw a ball, the color is picked uniformly among green, red and blue. 
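To make this reading concrete, the following small Python sketch (purely illustrative, and not part of any of the systems discussed in this paper) enumerates the possible worlds of this two-clause program by hand: the probabilistic fact draw is either true or false, and only when it is true does the annotated disjunction select exactly one color. Summing world probabilities then yields, for instance, Ps(color(red)) = 0.4 · 1/3 ≈ 0.133.

    # Hand-coded enumeration of the draw/color program above (illustrative only).
    P_DRAW = 0.4
    COLORS = {"green": 1 / 3, "red": 1 / 3, "blue": 1 / 3}

    def worlds():
        """Yield (probability, set of true atoms) for every possible world."""
        yield 1 - P_DRAW, set()                    # ball not drawn, AD body false
        for color, p_color in COLORS.items():      # ball drawn, exactly one color chosen
            yield P_DRAW * p_color, {"draw", "color(%s)" % color}

    def success_probability(query_atom):
        """Sum the probabilities of all worlds entailing the query atom."""
        return sum(p for p, atoms in worlds() if query_atom in atoms)

    print(success_probability("color(red)"))   # 0.4 * 1/3, i.e. about 0.1333
    print(sum(p for p, _ in worlds()))         # 1.0: the worlds form a distribution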
As for probabilistic facts, a nonground AD denotes the set of all its groundings, and for each such grounding, choosing one of its head atoms to be true is seen as an independent random event. That is, the annotated disjunction 1 3 :: color(B, green); 1 3 :: color(B, red); 1 3 :: color(B, blue) :\u2212ball(B). de\ufb01nes an independent probabilistic choice of color for each ball B. As noted already by Vennekens et al. [2004], the probabilistic choice over head atoms in an annotated disjunction can equivalently be expressed using a set of logical clauses, one for each head, and a probabilistic choice over facts added to the bodies of these clauses, e.g. color(B, green) :\u2212ball(B), choice(B, green). color(B, red) :\u2212ball(B), choice(B, red). color(B, blue) :\u2212ball(B), choice(B, blue). 1 3 :: choice(B, green); 1 3 :: choice(B, red); 1 3 :: choice(B, blue). This example illustrates that annotated disjunctions de\ufb01ne a distribution PF over basic facts as required in the distribution semantics, but can simplify modeling by directly expressing probabilistic consequences. As mentioned above, a probabilistic fact directly corresponds to an annotated disjunction with a single atom in the head and an empty body. Conversely, each annotated disjunction can \u2013 for the purpose of calculating success probabilities \u2013 be equivalently represented using a set of probabilistic facts and deterministic clauses, which together simulate a sequential choice mechanism; we refer to Appendix B for details. Independent Causes Some languages, e.g. ICL [Poole, 2008], assume that head atoms in the same or di\ufb00erent annotated disjunctions cannot unify with one another, while others, e.g., LPADs [Vennekens et al., 2004], do not make this restriction, but instead view each annotated disjunction as an independent cause for the conclusions to hold. In that case, the structure of the program de\ufb01nes the combined e\ufb00ect of these causes, similarly to how the two clauses for alarm in our earlier example (1) combine the two causes burglary and earthquake. We illustrate this on the Russian roulette example by Vennekens et al. [2009], which involves two guns. 1 6 :: death:\u2212 pull trigger(left gun). 1 6 :: death:\u2212 pull trigger(right gun). Each gun is an independent cause for death. Pulling both triggers will result in death being true with a probability of 1 \u2212(1 \u22121 6)2, which exactly corresponds 7 \fto the probability of death being proven via the \ufb01rst or via the second annotated disjunction (or both). Assuming independent causes closely corresponds to the noisy-or combining rule that is often employed in the Bayesian network literature, cf. Section 5. 2.3 Inference Tasks In probabilistic programming and statistical relational learning, the following inference tasks have been considered: \u2022 In the SUCC(q) task, a ground query q is given, and the task is to compute SUCC(q) = Ps(q), the success probability of the query as speci\ufb01ed in Equation (4).2 \u2022 In the MARG(Q | e) task, a set Q of ground atoms of interest, the query atoms, and a ground query e, the evidence, are given. The task is to compute the marginal probability distribution of each atom q \u2208Q given the evidence, Ps(q|e) = Ps(q \u2227e) Ps(e) . The SUCC(q) task corresponds to the special case of the MARG(Q | e) task with Q = {q} and e = true (and thus Ps(e) = 1). 
\u2022 The MAP(Q | e) task is to \ufb01nd the most likely truth-assignment q to the atoms in Q given the evidence e, that is, to compute MAP(Q | e) = arg max q Ps(Q = q|e) \u2022 The MPE(U | e) task is to \ufb01nd the most likely world where the given evidence query e holds. Let U be the set of all atoms in the Herbrand base that do not occur in e. Then, the task is to compute the most likely truth-assignment u to the atoms in U, MPE(e) = MAP(U | e). \u2022 In the V IT(q) task, a query q is given, and the task is to \ufb01nd a Viterbi proof of q. Let E(q) be the set of all explanations or proofs of q, that is, of all sets F \u2032 of ground probabilistic atoms for which q is true in the corresponding possible world. Then, the task is to compute V IT(q) = arg max X\u2208E(q) Ps( ^ f\u2208X f). To illustrate, consider our initial alarm example (1) with e = calls(mary) and Q = {burglary, calls(john)}. The worlds where the evidence holds are listed in Table 1, together with their probabilities. The answer to the MARG task is Ps(burglary|calls(mary)) = 0.07/0.196 = 0.357 and Ps(calls(john)|calls(mary)) = 2Non-ground queries have been considered as well, in which case the success probability corresponds to the probability that q\u03b8 is true for some grounding substitution \u03b8. 8 \f0.0784/0.196 = 0.4. The answer to the MAP task is burglary=false, calls(john)=false, as its probability 0.0756/0.196 is higher than 0.028/0.196 (for true, true), 0.042/0.196 (for true, false) and 0.0504/0.196 (for false, true). The world returned by MPE is the one corresponding to the set of facts {earthquake, hears_alarm(mary)}. Finally, the Viterbi proof of query calls(mary) is e \u2227hm, as 0.2 \u00b7 0.7 > 0.1 \u00b7 0.7 (for b \u2227hm). 3 Inference We now provide an overview of existing inference approaches in probabilistic (logic) programming. As most existing work adresses the SUCC task of computing success probabilities, cf. Equation (4), we focus on this task here, and mention other tasks in passing where appropriate. For simplicity, we assume probabilistic facts as basic building blocks. Computing marginals under the distribution semantics has to take into account both probabilistic and logical aspects. We therefore distinguish between exact inference and approximation using either bounds or sampling on the probabilistic side, and between methods based on forward and backward reasoning and grounding to CNF on the logical side. Systems implementing (some of) these approaches include the ICL system AILog23, the PRISM system4, the ProbLog implementations ProbLog15 and ProbLog26, and the LPAD implementations cplint7 and PITA8. General statements about systems in the following refer to these six systems. 3.1 Exact Inference As most methods for exact inference can be viewed as operating (implicitly or explicitly) on a propositional logic representation of all possible worlds that entail the query q of interest, we \ufb01rst note that this set of possible worlds is given by the following formula in disjunctive normal form (DNF) DNF(q) = _ F \u2032\u2286F \u2203\u03b8F \u2032\u222aR| =q\u03b8 \uf8eb \uf8ed^ fi\u2208F \u2032 fi \u2227 ^ fi\u2208F \\F \u2032 \u00acfi \uf8f6 \uf8f8 (6) and that the structure of this formula exactly mirrors that of Equation (5) de\ufb01ning the success probability in the case of probabilistic facts, where we replace summation by disjunction, multiplication by conjunction, and probabilities by truth values of random variables (or facts). 
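This correspondence between Equation (5) and the evaluation of DNF(q) can be made tangible with a small Python sketch (again purely illustrative and unrelated to the implementations discussed below): it enumerates all truth value assignments to the four probabilistic facts of the alarm example, applies the rules to decide whether calls(mary) holds in the resulting world, and sums the probabilities of the entailing worlds, recovering the value 0.196 from Table 1.

    from itertools import product

    # Probabilities of the four probabilistic facts of the alarm example (1).
    FACTS = {"burglary": 0.1, "earthquake": 0.2,
             "hears_alarm(mary)": 0.7, "hears_alarm(john)": 0.4}

    def entails_calls_mary(true_facts):
        """Evaluate the definite rules of the alarm program in one world."""
        alarm = "burglary" in true_facts or "earthquake" in true_facts
        return alarm and "hears_alarm(mary)" in true_facts

    def success_probability(entails):
        """Brute-force version of Equation (5): sum over all possible worlds."""
        names, total = list(FACTS), 0.0
        for values in product([True, False], repeat=len(names)):
            true_facts = {f for f, v in zip(names, values) if v}
            world_prob = 1.0
            for f, v in zip(names, values):
                world_prob *= FACTS[f] if v else 1 - FACTS[f]
            if entails(true_facts):
                total += world_prob
        return total

    print(success_probability(entails_calls_mary))   # 0.196, as in Table 1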
In our initial alarm example (1), the DNF corresponding to calls(mary) contains the worlds shown in Table 1, and thus is (b \u2227e \u2227hm \u2227hj) \u2228(b \u2227e \u2227hm \u2227\u00achj) \u2228(b \u2227\u00ace \u2227hm \u2227hj) (7) \u2228(b \u2227\u00ace \u2227hm \u2227\u00achj) \u2228(\u00acb \u2227e \u2227hm \u2227hj) \u2228(\u00acb \u2227e \u2227hm \u2227\u00achj). 3http://artint.info/code/ailog/ailog2.html 4http://sato-www.cs.titech.ac.jp/prism/ 5included in YAP Prolog, http://www.dcc.fc.up.pt/~vsc/Yap/ 6http://dtai.cs.kuleuven.be/problog/ 7included in YAP Prolog, http://www.dcc.fc.up.pt/~vsc/Yap/ 8included in XSB Prolog, http://xsb.sourceforge.net/ 9 \f{draw} {} {not(draw)} {draw,red} {draw,green} {draw,blue} 0.4 0.6 0.2 0.7 0.1 0.08 0.28 0.04 0.6 Figure 1: Forward reasoning example. Forward Reasoning: Following the de\ufb01nition of the semantics of CP-logic [Vennekens et al., 2009], forward reasoning can be used to build a tree whose leaves correspond to possible worlds, on which success probabilities can be calculated. Speci\ufb01cally, the root of the tree is the empty set, and in each node, one step of forward reasoning is executed, creating a child for each possible outcome in the case of probabilistic facts or annotated disjunctions. For instance, consider the program 0.4 :: draw. 0.2 :: green; 0.7 :: red; 0.1 :: blue :\u2212draw. As illustrated in Figure 1, the \ufb01rst step using the probabilistic fact draw adds two children to the root, one containing draw, and one containing not(draw). In the latter case, the body of the AD is false and thus no further reasoning steps are possible. For the world where draw is true, the AD introduces three children, adding green, red and blue, respectively, and no further reasoning steps are possible in the resulting worlds. Thus, each path from the root to a leaf constructs one possible world, whose probability is the product of assignments made along the path. Domains for non-ground facts have to be explicitly provided to ensure termination. While this approach clearly illustrates the semantics, even in the \ufb01nite case, it su\ufb00ers from having to enumerate all possible worlds, and is therefore not used in practice. Backward Reasoning: Probably the most common inference strategy in probabilistic logic programming is to collect all possible proofs or explanations of a given query using backward reasoning, represent them in a suitable data structure, and compute the probability on that structure. As discussed in Section 2.3, an explanation is a partial truth value assignment to probabilistic facts that is su\ufb03cient to prove the query via SLD-resolution. For instance, b \u2227hm is the explanation for calls(mary) given by the derivation discussed in Section 2.1 (page 5), as it depends on burglary and hears_alarm(mary) being true, but not on any particular truth values of earthquake and hears_alarm(john). This query has a second proof, e \u2227hm, obtained by using the \ufb01rst clause for alarm during backward reasoning. We can describe the set of possible worlds where calls(mary) is true by the disjunction of all proofs of the query, 10 \f(b \u2227hm) \u2228(e \u2227hm), which is more compact than the disjunction (7) explicitly listing the six possible worlds. We cannot, however, calculate the probability of this more compact DNF by simply replacing conjunction by multiplication and disjunction by addition as we did for the longer DNF above. 
The reason is that the two proofs are not mutually exclusive, that is, they can be true in the same possible world. Speci\ufb01cally, in our example this holds for the two worlds b \u2227e \u2227hm \u2227hj and b \u2227e \u2227hm \u2227\u00achj, and the probability of these worlds, 0.1\u00b70.2\u00b70.7\u00b70.4+0.1\u00b70.2\u00b70.7\u00b7(1\u22120.4) = 0.014 is exactly the di\ufb00erence between 0.21 as obtained by the direct sum of products 0.1 \u00b7 0.7 + 0.2 \u00b7 0.7 and the true probability 0.196. This is also known as the disjoint-sum-problem, which is #Pcomplete [Valiant, 1979]. Existing languages and systems approach the problem from di\ufb00erent angles. PHA [Poole, 1992] and PRISM [Sato and Kameya, 2001] rely on the exclusive explanation assumption, that is, they assume that the structure of the program guarantees mutual exclusiveness of all conjunctions in the DNF, which allows one to evaluate it as a direct sum of products (as done in the PRISM system). This assumption allows for natural modeling of many models, including e.g., probabilistic grammars and Bayesian networks, but prevents direct modeling of e.g., connection problems over uncertain graphs where each edge independently exists with a certain probability, or simple variations of Bayesian network models such as our running example. ICL [Poole, 2000] is closely related to PHA, but does not assume exclusive explanations. Poole instead suggests symbolic disjoining techniques to split explanations into mutually exclusive ones (implemented in AILog2). The ProbLog1 implementation of ProbLog [De Raedt et al., 2007, Kimmig et al., 2011a] has been the \ufb01rst probabilistic programming system representing DNFs as Binary Decision Diagrams (BDDs), an advanced data structure that disjoins explanations. This technique has subsequently also been adopted for ICL and LPADs in the cplint and PITA systems [Riguzzi, 2009, Riguzzi and Swift, 2011]. AILog2 and cplint also support computing conditional probabilities. Riguzzi [2013c] has introduced an approach called PITA(OPT) that automatically recognizes certain independencies that allow one to avoid the use of disjoining techniques when computing marginal probabilities. Given its focus on proofs, backward reasoning can easily be adapted to solve the VIT task of \ufb01nding most likely proofs, as done in the PRISM, ProbLog1 and PITA systems. Reduction to Weighted Model Counting: A third way to approach the logic side of inference in probabilistic logic programming has been suggested by Fierens et al. [2011, 2014], who use the propositional logic semantics of logic programming to reduce MARG inference to weighted model counting (WMC) and MPE inference to weighted MAX-SAT. The \ufb01rst step again builds a Boolean formula representing all models where the query is true, but this time, using conjunctive normal form (CNF), and associating a weight with every literal in the formula. More speci\ufb01cally, it grounds the parts of the logic program relevant to the query (that is, the rule groundings contributing to a proof of the query, as determined using backward reasoning), similar to what happens in answer set programming, transforms this ground program into an equivalent CNF based on the semantics of logic programming, and de\ufb01nes the weight function for the second step using the given probabilities. 
The second step can then use any existing approach to WMC or weighted MAX-SAT, such as representing 11 \fthe CNF as an sd-DNNF, a data structure on which WMC can be performed e\ufb03ciently. For instance, the relevant ground program for calls(mary) in our initial alarm example (1) is 0.1 :: burglary. 0.7 :: hears alarm(mary). 0.2 :: earthquake. alarm :\u2212earthquake. alarm :\u2212burglary. calls(mary) :\u2212alarm, hears alarm(mary). Next, the rules in the ground program are translated to equivalent formulas in propositional logic, taking into account that their head atoms can only be true if a corresponding body is true: alarm \u2194earthquake \u2228burglary calls(mary) \u2194alarm \u2227hears alarm(mary) The conjunction of these formulas is then transformed into CNF as usual in propositional logic. The weight function assigns the corresponding probabilities to literals of probabilistic facts, e.g., w(burglary) = 0.1, w(\u00acburglary) = 0.9, and 1.0 to all other literals, e.g., w(calls(mary)) = w(\u00accalls(mary)) = 1.0. The weight of a model is the product of all literal weights, and the WMC of a formula the sum of weights of all its models, which exactly corresponds to the success probability. Evidence can directly be incorporated by conjoining it with the CNF. Exact MARG inference using this approach is implemented in ProbLog2. Lifted Inference is the topic of a lot of research in statistical relational learning today [Kersting, 2012, Poole, 2003]. Lifted inference wants to realize probabilistic logic inference at the lifted, that is, non-grounded level in the same way that resolution realizes this for logical inference. The problem of lifted inference can be illustrated on the following example (cf. also Poole [2008]): p :: famous(Y). popular(X) :\u2212friends(X, Y), famous(Y). In this case Ps(popular(john)) = 1 \u2212(1 \u2212p)m where m is the number of friends of john, that is, to determine the probability that john is popular, it su\ufb03ces to know how many friends john has. We do not need to know the identities of these friends, and hence, need not ground the clauses. Various techniques for lifted inference have been obtained over the past decade. For instance, Poole [2003] shows how variable elimination, a standard approach to probabilistic inference in graphical models, can be lifted and Van den Broeck et al. [2011] studied weighted model counting for \ufb01rst order probabilistic logic using a generalization of d-DNNFs for \ufb01rst order logic. Lifted inference techniques are \u2013 to the best of our knowledge \u2013 not yet supported by 12 \fcurrent probabilistic logic programming language implementations, which explains why we do not provide more details in this paper. It remains a challenge for further work. A recent survey on lifted inference is provided by Kersting [2012]. 3.2 Approximate Inference using Bounds As the probability of a set of possible worlds monotonically increases if more models are added, hard lower and upper bounds on the success probability can be obtained by considering a subset or a superset of all possible worlds where a query is true. For instance, let W be the set of possible worlds where a query q holds. The success probability of q thus is the sum of the probabilities of all worlds in W. If we restrict this sum to a subset of W, we obtain a lower bound, and an upper bound if we sum over a superset of W. 
In our example, as calls(mary) is true in b \u2227e \u2227hm \u2227hj, but false in b \u2227e \u2227\u00achm \u2227hj, we have 0.1 \u00b7 0.2 \u00b7 0.7 \u00b7 0.4 \u2264Ps(calls(mary)) \u22641 \u2212(0.1 \u00b7 0.2 \u00b7 (1 \u22120.7) \u00b7 0.4). In practice, this approach is typically used with the DNF obtained by backward reasoning, that is, the set of proofs of the query, rather than with the possible worlds directly. This has initially been suggested for PHA by Poole [1992], and later also been adapted for ProbLog [De Raedt et al., 2007, Kimmig et al., 2008] and LPADs [Bragaglia and Riguzzi, 2011]. The idea is to maintain a set of partial derivations during backward reasoning, which allows one to, at any point, obtain a lower bound based on all complete explanations or proofs found so far, and an upper bound based on those together with all partial ones (based on the assumption that those will become proofs with probability one). For instance, (e \u2227hm) \u2228b provides an upper bound of 0.226 for the probability of calls(mary) based on the proof e \u2227hm (which provides the corresponding lower bound 0.14) and the partial derivation b (which still requires to prove hears_alarm(mary)). Di\ufb00erent search strategies are possible here, including e.g., iterative deepening or best \ufb01rst search. Lower bounds based on a \ufb01xed number of proofs have been proposed as well, either using the k explanations with highest individual probabilities [Kimmig et al., 2011a], or the k explanations chosen by a greedy procedure that maximizes the probability an explanation adds to the one of the current set [Renkens et al., 2012]. Approximate inference using bounds is available in ProbLog1, cplint, and ProbLog2. 3.3 Approximate Inference by Sampling While probabilistic logic programming often focuses on exact inference, approximate inference by sampling is probably the most popular approach to inference in many other probabilistic languages. Sampling uses a large number of random executions or randomly generated possible worlds, from which the probability of a query is estimated as the fraction of samples where the query holds. For instance, samples can be generated by randomly choosing truth values of probabilistic facts as needed during backward reasoning, until either a proof is found or all options are exhausted [Kimmig et al., 2008, Bragaglia and Riguzzi, 2011, Riguzzi, 2013b]. Fierens et al. [2014] have used MC-SAT [Poon and Domingos, 2006] to perform approximate WMC on the CNF representing all models. Systems for languages that specify generative models, such as BLOG [Milch 13 \fet al., 2005] and distributional clauses [Gutmann et al., 2011], cf. Sec. 4.2, often use forward reasoning to generate samples. A popular approach to sampling are MCMC algorithms, which, rather than generating each sample from scratch, generate a sequence of samples by making random modi\ufb01cations to the previous sample based on a so-called proposal distribution. This approach has been used e.g., for the probabilistic functional programming language Church [Goodman et al., 2008], for BLOG [Arora et al., 2010], and for the probabilistic logic programming languages PRISM [Sato, 2011] and ProbLog [Moldovan et al., 2013]. ProbLog1 and cplint provide inference techniques based on backward sampling, and the PRISM system includes MCMC inference. 
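The basic principle underlying such estimates can be illustrated with a short Python sketch (illustrative only; none of the cited systems works exactly this way): it repeatedly samples a truth value for each probabilistic fact of the alarm example, evaluates the rules in the sampled world, and reports the fraction of worlds in which calls(mary) holds, which converges to the exact success probability 0.196 as the number of samples grows.

    import random

    FACTS = {"burglary": 0.1, "earthquake": 0.2,
             "hears_alarm(mary)": 0.7, "hears_alarm(john)": 0.4}

    def sample_world(rng):
        """Sample one possible world by flipping each probabilistic fact."""
        return {f for f, p in FACTS.items() if rng.random() < p}

    def holds_calls_mary(world):
        """Evaluate the rules of the alarm program in the sampled world."""
        alarm = "burglary" in world or "earthquake" in world
        return alarm and "hears_alarm(mary)" in world

    def estimate(n_samples=100000, seed=0):
        """Monte Carlo estimate of the success probability of calls(mary)."""
        rng = random.Random(seed)
        hits = sum(holds_calls_mary(sample_world(rng)) for _ in range(n_samples))
        return hits / n_samples

    print(estimate())   # close to the exact value 0.196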
4 Probabilistic Programming Concepts While probabilistic programming languages based on the distribution semantics as discussed so far are expressive enough for a wide range of models, an important part of their power is their support for additional programming concepts. Based on primitives used in a variety of probabilistic languages, we discuss a range of such concepts next, also touching upon their implications for inference. 4.1 Flexible Probabilities A probabilistic fact with \ufb02exible probability is of the form P :: atom where atom contains the logical variable P that has to be instantiated to a probability when using the fact. The following example models drawing a red ball from an urn with R red and G green balls, where each ball is drawn with uniform probability from the urn: Prob :: red(Prob). draw red(R, G):\u2212 Prob is R/(R + G), red(Prob). The combination of \ufb02exible probabilities and Prolog code o\ufb00ers a powerful tool to compute probabilities on-the-\ufb02y, cf. e.g., [Poole, 2008]. Flexible probabilities have also been used in extended SLPs [Angelopoulos and Cussens, 2004], and are supported by the probabilistic logic programming systems AILog2, ProbLog1, cplint and ProbLog2. Indeed, probabilistic facts with \ufb02exible probabilities are easily supported by backward inference as long as these facts are ground on calling, but cannot directly be used with exact forward inference, as they abbreviate an in\ufb01nite set of ground facts and thus would create an in\ufb01nite tree of possible worlds.9 4.2 Distributional Clauses Annotated disjunctions \u2013 as speci\ufb01ed in Section 2.2 \u2013 are of limited expressivity, as they can only de\ufb01ne distributions over a \ufb01xed, \ufb01nite number of head elements. While more \ufb02exible discrete distributions can be expressed using a combination of \ufb02exible probabilities and Prolog code, this may require signi\ufb01cant programming e\ufb00ort. Gutmann et al. [2010] introduce Hybrid ProbLog, an 9If only \ufb01nitely many di\ufb00erent instances of such a fact are relevant for any possible world of a given program, a mechanism similarly to the magic set transformation [Bancilhon et al., 1986] may circumvent this problem. 14 \fextension of ProbLog to continuous distributions, but their inference approach based on exact backward reasoning and discretization severely limits the use of such distributions. To alleviate these problems, distributional clauses were introduced by Gutmann et al. [2011], whom we closely follow. A distributional clause is a clause of the form h \u223cD :b1, . . . , bn. where \u223cis a binary predicate used in in\ufb01x notation. Similarly to annotated disjunctions, the head (h \u223cD)\u03b8 of a distributional clause is de\ufb01ned for a grounding substitution \u03b8 whenever (b1, . . . , bn)\u03b8 is true in the semantics of the logic program. Then the distributional clause de\ufb01nes the random variable h\u03b8 as being distributed according to the associated distribution D\u03b8. Possible distributions include \ufb01nite discrete distributions such as a uniform distribution, discrete distributions over in\ufb01nitely many values, such as a Poisson distribution, and continuous distributions such as Gaussian or Gamma distributions. The outcome of a random variable h is represented by the term \u2243(h). Both random variables h and their outcome \u2243(h) can be used as other terms in the program. 
However, the typical use of terms \u2243(h) is inside comparison predicates such as equal/2 or lessthan/2. In this case these predicates act in the same way as probabilistic facts in Sato\u2019s distribution semantics. Indeed, depending on the value of \u2243(h) (which is determined probabilistically) they will be true or false. Consider the following distributional clause program. color(B) \u223cdiscrete((0.7 : green), (0.3 : blue)) :ball(B). diameter(B, MD) \u223cgamma(MD1, 20) :mean diameter(\u2243(color(B)), MD), MD1 is MD/20. mean diameter(green, 15). mean diameter(blue, 25). ball(1). ball(2). . . . ball(k). The \ufb01rst clause states that for every ball B, there is a random variable color(B) whose value is either green (with probability 0.7) or blue (with probability 0.3). This discrete distribution directly corresponds to the one given by the annotated disjunction 0.7 :: color(B, green); 0.3 :: color(B, blue) :ball(B). The second distributional clause in the example de\ufb01nes a random variable diameter(B,MD) for each ball B. This random variable follows a Gamma distribution with parameters MD/20 and 20, where the mean diameter MD depends on the color of the ball. Distributional clauses are the logic programming equivalent of the mechanisms employed in statistical relational languages such as Bayesian Logic (BLOG) [Milch et al., 2005], Church [Goodman et al., 2008] and IBAL [Pfe\ufb00er, 2001], which also use programming constructs to de\ufb01ne generative process that can de\ufb01ne new variables in terms of existing one. As we have seen in the example, annotated disjunctions can easily be represented as distributional clauses with \ufb01nite, discrete distributions. However, distributional clauses are more expressive than annotated disjunctions (and the standard distribution semantics) as they can also represent continuous distributions. 15 \fPerforming inference with distributional clauses raises some extra di\ufb03culties (see [Gutmann et al., 2011] for more details). The reason for this is that continuous distributions (such as a Gaussian or a Gamma-distribution) have uncountable domains. Typical inference with constructs such as distributional clauses will therefore resort to sampling approaches in order to avoid the need for evaluating complex integrals. It is quite natural to combine sampling for distributional clauses with forward reasoning10, realizing a kind of generative process, though more complex strategies are also possible, cf. [Gutmann et al., 2011]. 4.3 Unknown Objects One of the key contributions of Bayesian Logic (BLOG) [Milch et al., 2005] is that it allows one to drop two common assumptions, namely the closed world assumption (all objects in the world are known in advance) and the unique names assumption (di\ufb00erent terms denote di\ufb00erent objects), which makes it possible to de\ufb01ne probability distributions over outcomes with varying sets of objects. This is achieved by de\ufb01ning generative processes that construct possible worlds, where the existence and the properties of objects can depend on objects created earlier in the process. As already shown by Poole [2008], such generative processes with an unknown number of objects can often be modeled using \ufb02exible probabilities and Prolog code to specify a distribution over the number of objects as done in BLOG. Distributional clauses simplify this modeling task, as they make introducing a random variable corresponding to this number straightforward. 
We can then use the between/3 predicate to enumerate the objects in de\ufb01nitions of predicates that refer to them, cf. also [Poole, 2008]. Below, the random variable nballs stands for the number of balls, which is Poisson distributed with \u03bb = 6. For each possible value \u2243(nballs), the corresponding number of balls are generated which are identi\ufb01ed by the numbers 1, 2, . . . , \u2243(nballs). nballs \u223cpoisson(6). ball(N) : \u2212between(1, \u2243(nballs), N). 4.4 Stochastic Memoization A key concept in the probabilistic functional programming language Church [Goodman et al., 2008] is stochastic memoization. If a random variable in Church is memoized, subsequent calls to it simply look up the result of the \ufb01rst call, similarly to tabling in logic programming. On the other hand, for random variables that are not memoized, each reference to the variable corresponds to an independent draw of an outcome. In contrast to Church, probabilistic logic programming languages and their implementations typically do not leave this choice to the user. In ICL, ProbLog, LPADs and the basic distribution semantics as introduced in [Sato, 1995], each ground probabilistic fact directly corresponds to a random variable, i.e., within a possible world, each occurrence of such a fact has the same truth value, and the fact is thus memoized. Furthermore, the probability of the fact is taken into account once when calculating 10Valid distributional clause programs are required to have \ufb01nite support, which ensures termination. 16 \fthe probability of a proof, independently of the number of times it occurs in that proof. While early versions of PRISM [Sato, 1995, Sato and Kameya, 1997] used binary or n-ary probabilistic choices with an argument that explicitly distinguished between di\ufb00erent calls, this argument has been made implicit later on [Sato and Kameya, 2001], meaning that the PRISM implementation never memoizes the outcome of a random variable. The di\ufb00erence between the two approaches can be explained using the following example. For the AD ( 1 3 :: color(green); 1 3 :: color(red); 1 3 :: color(blue)), there are three answers to the goal (color(X),color(Y)), one answer X = Y = c for each color c with probability 1 3, as exactly one of the facts color(c) is true in each possible world when memoizing color (as in ProbLog and ICL). Asking the same question when color is not memoized (as in PRISM) results in 9 possible answers with probability 1 9 each. The query then \u2013 implicitly \u2013 corresponds to an ICL or ProbLog query (color(X,id1), color(Y,id2)), where the original AD is replaced by a non-ground variant ( 1 3 :: color(green, ID); 1 3 :: color(red, ID); 1 3 :: color(blue, ID)) and id1 and id2 are trial identi\ufb01ers that are unique to the call. Avoiding the memoization of probabilistic facts is necessary in order to model stochastic automata, probabilistic grammars, or stochastic logic programs [Muggleton, 1996] under the distribution semantics. There, a new rule is chosen randomly for each occurrence of the same nonterminal state/symbol/predicate within a derivation, and each such choice contributes to the probability of the derivation. The rules for a nonterminal thus form a family of independent identically distributed random variables, and each choice is automatically associated with one variable from this family. Consider the following stochastic logic program. 
It is in fact a fragment of a stochastic de\ufb01nite clause grammar; the rules essentially encode the probabilistic context free grammar rules de\ufb01ning 0.3 : vp \u2192verb, 0.5 : vp \u2192verb, np and 0.2 : vp \u2192verb, pp. There are three rules for the non-terminal vp and each of them is chosen with an associated probability. Furthermore, the sum of the probabilities for these rules equals 1. 0.3 : vp(H, T) :\u2212verb(H, T). 0.5 : vp(H, T) :\u2212verb(H, H1), np(H1, T). 0.2 : vp(H, T) :\u2212verb(H, H1), pp(H1, T). This type of stochastic grammar can easily be simulated in the distribution semantics using one dememoized AD (or switch) for each non-terminal, a rule calling the AD to make the selection, and a set of rules linking the selection to 17 \fthe SLP rules:11 dememoize 0.3 :: vp sel(rule1); 0.5 :: vp sel(rule2); 0.2 :: vp sel(rule3). vp(H, T) :\u2212vp sel(Rule), vp rule(Rule, H, T). vp rule(rule1, H, T) :\u2212verb(H, T). vp rule(rule2, H, T) :\u2212verb(H, H1), np(H1, T). vp rule(rule3, H, T) :\u2212verb(H, H1), pp(H1, T). All inference approaches discussed here naturally support stochastic memoization; this includes the ones implemented in AILog2, ProbLog1, ProbLog2, cplint and PITA. The PRISM system uses exact inference based on backward reasoning in the setting without stochastic memoization. In principle, stochastic memoization can be disabled in backward reasoning by automatically adding a unique identi\ufb01er to each occurrence of the same random variable. However, for techniques that build propositional representations di\ufb00erent from mutually exclusive DNFs (such as the DNFs of BDD-based methods and the CNFs when reducing to WMC), care is needed to ensure that these identi\ufb01ers are correctly shared among di\ufb00erent explanations when manipulating these formulas. Backward sampling can easily deal with both memoized and dememoized random variables. As only one possible world is considered at any point, each repeated occurrence of the same dememoized variable is simply sampled independently, whereas the \ufb01rst result sampled within the current world is reused for memoized ones. Forward sampling cannot be used without stochastic memoization, as it is unclear up front how many instances are needed. MCMC methods have been developed both for ProbLog (with memoization) and PRISM (without memoization). 4.5 Constraints In knowledge representation, answer set programming and databases, it is common to allow the user to specify constraints on the possible models of a theory. In knowledge representation, one sometimes distinguishes inductive de\ufb01nitions (such as the de\ufb01nite clauses used in logic programming) from constraints. The former are used to de\ufb01ne predicates, the latter impose constraints on possible worlds. While the use of constraints is still uncommon in probabilistic logic programming12 it is conceptually easy to accommodate this when working with the distribution semantics, cf. Fierens et al. [2012]. While such constraints can in principle be any \ufb01rst-order logic formula, we will employ clausal constraints here. A clausal constraint is an expression of the form h1; . . . ; hN :\u2212b1, . . . , bM. where the hi and bj are literals. The constraint speci\ufb01es that whenever (b1 . . . bM)\u03b8 is true for a substitution \u03b8 grounding the clause at least one of the hi\u03b8 must also 11The dememoize keyword is used for clarity here; it is not supported by existing systems. 
12Hard and soft constraints are used in Markov Logic [Richardson and Domingos, 2006], but Markov Logic does not support inductive de\ufb01nitions as this requires a least Herbrand semantics, cf. Fierens et al. [2012]. 18 \fbe true. All worlds in which a constraint is violated become impossible, that is, their probability becomes 0. Constraints are very useful for specifying complex properties that possible worlds must satisfy. To illustrate constraints, reconsider the alarm example and assume that it models a situation in the 1930s where there is only one phone available in the neighborhood implying that at most one person can call. This could be represented by the constraint X = Y :\u2212calls(X), calls(Y). Imposing this constraint would exclude all worlds in which both Mary and John hear the alarm and call. The total probability mass for such worlds is 0.4\u00b70.8 = 0.32. By excluding these worlds, one looses probability mass and thus has to normalize the probabilities of the remaining possible worlds. For instance, the possible world corresponding to the truth value assignment burglary=true, earthquake=false, hears_alarm(mary)=true, hears_alarm(john)=false yielded a probability mass of 0.1 \u00b7 (1 \u22120.2) \u00b7 0.7 \u00b7 (1 \u22120.6) = 0.0336 without constraints. Now, when enforcing the constraint, one obtains 0.0336/(1\u22120.32). Thus the semantics of constraints correspond to computing conditional probabilities where one conditions on the constraints being satis\ufb01ed. Handling constraints during inference has not been a focus of inference in probabilistic logic programming, and \u2013 to the best of our knowledge \u2013 no current system provides explicit support for constraints. 4.6 Negation as Failure So far, we have only considered probabilistic programs using de\ufb01nite clauses, that is, programs that only use positive literals in clause bodies, as those are guaranteed to have a unique model for any truth value assignment to basic probabilistic events. It is however possible to adopt Prolog\u2019s negation as failure on ground literals under the distribution semantics, as long as all truth values of derived atoms are still uniquely determined by those of the basic facts, cf., e.g., [Poole, 2000, Sato et al., 2005, Kimmig et al., 2009, Riguzzi, 2009, Fierens et al., 2014]. Then, in each possible world, any ground query q either succeeds or fails, and its negation not(q) succeeds in exactly those worlds where q fails. Thus, the probability of a ground query not(q) is the sum of the probabilities of all possible worlds that do not entail q. Consider the following variant of our alarm example, where people also call if there is no alarm, but they have gossip to share: 0.1 :: burglary. 0.7 :: hears alarm(mary). 0.2 :: earthquake. 0.4 :: hears alarm(john). 0.3 :: has gossip(mary). 0.6 :: has gossip(john). alarm :\u2212earthquake. alarm :\u2212burglary. calls(X) :\u2212alarm, hears alarm(X). calls(X) :\u2212not(alarm), has gossip(X). call :\u2212calls(X). 19 \fThe new rule for calls(X) can only possibly apply in worlds where not(alarm) succeeds, that is, alarm fails, which are exactly those containing neither burglary nor earthquake. Using gm as shorthand for has_gossip(mary)= true, we obtain the additional explanation \u00ace \u2227\u00acb \u2227gm for calls(mary). 
Thus, in the presence of negation, explanations no longer correspond to sets of probabilistic facts as in the case of de\ufb01nite clause programs, but to sets of positive and negative literals for probabilistic facts. While not(alarm) has a single explanation in this simple example, in general, explanations for negative literals can be much more complex, as they have to falsify every possible explanation of the corresponding positive literal by \ufb02ipping the truth value of at least one probabilistic fact included in the explanation. Negation as failure can be handled in forward and backward reasoning both for exact inference and for sampling, though forward reasoning has to ensure to proceed in the right order. Exact inference with backward reasoning often bene\ufb01ts from tabling. Negation as failure complicates approximate inference using bounds, as explanations for failing goals have to be considered. AILog2, ProbLog1, ProbLog2, cplint and PITA all support negation as failure in their exact and sampling based approaches. The PRISM system follows the approach proposed by Sato et al. [2005] and compiles negation into a de\ufb01nite clause program with uni\ufb01cation constraints. Current MCMC approaches in probabilistic logic programming do not support negation beyond that of probabilistic facts. 4.7 Second Order Predicates When modeling relational domains, it is often convenient to reason over sets of objects that full\ufb01l certain conditions, for instance, to aggregate certain values over them. In logic programming, this is supported by second order predicates such as findall/3, which collects all answer substitutions for a given query in a list. In the following example, the query sum(S) will \ufb01rst collect all arguments of f/1 into a list and then sum the values using predicate sum_list/2, thus returning S=3. f(1). f(2). sum(Sum) :\u2212findall(X, f(X), L), sum list(L, Sum). Note that in Prolog, the list returned by findall/3 is unique. Under the distribution semantics, however, this list will be di\ufb00erent depending on which possible world is considered. To illustrate this, we replace the de\ufb01nition of f/1 in our example with probabilistic facts: 0.1 :: f(1). 0.2 :: f(2). sum(Sum) :\u2212findall(X, f(X), L), sum list(L, Sum). We now have four sets of facts \u2013 {f(1),f(2)}, {f(1)}, {f(2)}, and {} \u2013 leading to the four possible worlds {f(1),f(2),sum(3)}, {f(1),sum(1)}, {f(2),sum(2)}, and {sum(0)}, as the answer list L is di\ufb00erent in each case. This behavior of second order predicates in the probabilistic setting can pose a challenge to inference. In principle, all inference approaches could deal with 20 \fsecond order predicates. However, exact approaches would su\ufb00er from a blowup, as they have to consider all possible lists of elements \u2013 and thus all possible worlds \u2013 explicitly, whereas in sampling, each sample only considers one such list. As far as we know, the only system with some support for second order predicates is cplint, which allows bagof and setof with one of its backward reasoning modules [Riguzzi, 2013a]. 4.8 Meta-Calls One of the distinct features of programming languages such as Prolog and Lisp is the possibility to use programs as objects within programs, which enables meta-level programming. 
For their probabilistic extensions, this means reasoning about the probabilities of queries within a probabilistic program, a concept that is central to the probabilistic programming language Church, which builds upon a Lisp dialect [Goodman et al., 2008], and has also been considered with ProbLog [Mantadelis and Janssens, 2011]. Possible uses of such a feature include \ufb01ltering of proofs based on the probability of subqueries, or the dynamic de\ufb01nition of probabilities using queries, e.g., to implement simple forms of combining rules as in the following example, where max_true(G1,G2) succeeds with the success probability of the more likely argument. P :: p(P). max true(G1, G2) :\u2212prob(G1, P1), prob(G2, P2), max(P1, P2, P), p(P). % rest of program (omitted) In this section, we will use prob(Goal,Prob) to refer to an atom returning the success probability Prob of goal Goal, that is, implementing Equation (4). Note that such atoms are independent queries, that is, they do not share truth values of probabilistic facts with other atoms occurring in a derivation they are part of. Finally, if the second argument is a free variable upon calling, the success probability of prob(goal,Prob) is 1. For the sake of simplicity, we will assume here that the second argument will always be free upon calling.13 We extend the example above with the following program. 0.5 :: a. 0.7 :: b. 0.2 :: c. d :\u2212a, not(b). e :\u2212b, c. Querying for max_true(d,e) using backward reasoning will execute two calls to prob/2 in sequence: prob(d,P1) and prob(e,P2). Note that if multiple calls to prob/2 atoms occur in a proof, they are independent, i.e., even if they use the same probabilistic facts, those will (implicitly) correspond to di\ufb00erent copies of the corresponding random variables local to that speci\ufb01c prob/2 call. Put di\ufb00erently, prob/2 encapsulates part of our possible worlds. In the example, b is thus a di\ufb00erent random variable in prob(d,P1) and prob(e,P2). The reason for this encapsulation is twofold: \ufb01rst, the probability of a goal is not in\ufb02uenced by calculating the probability of another (or even the same) event before, and second, as prob/2 summarizes a set of possible worlds, the value of 13This is not a restriction, as prob(Goal,c) is equivalent to prob(Goal,P),P=c. 21 \fa random variable cannot be made visible to the outside world, as it may be di\ufb00erent in di\ufb00erent internal worlds. Indeed, in our example, b needs to be false to prove d, but true to prove e, so using the same random variable would force the top level query to be unprovable. We thus obtain a kind of hierarchically organized world: some probabilistic facts are used in the top level query, others are encapsulated in prob/2 atoms, whose queries might in turn rely on both directly called probabilistic facts and further calls to prob/2. In our example, prob(d,P1) uses random variables corresponding to probabilistic facts a and b, returning P1 = 0.5 \u00b7 (1 \u22120.7) = 0.15, prob(e,P2) uses random variables corresponding to probabilistic facts b and c, returning P2 = 0.7 \u00b7 0.2 = 0.14, and the top level query max_true(d,e) uses probabilistic fact p(0.15) and has probability P(more likely is true(d, e)) = 0.15. The probability of a derivation is determined by the probabilities of the probabilistic facts it uses outside all prob/2 calls. Those facts de\ufb01ne the possible worlds from the point of view of the top level query. 
In those worlds, the random variables of the encapsulated parts are hidden, as they have been aggregated by prob/2. Returning to our example and abstracting from the concrete remainder of the program, we observe that for any given pair of goals g1,g2 and a suitable program defining those goals, max_true(g1,g2) has exactly one proof: the first two body atoms always succeed and return the probabilities of the goals, the third atom deterministically finds the maximum m of the two probabilities, and the proof finally uses a single random variable p(m) with probability m. Thus, the query indeed succeeds with the probability of the more likely goal. Another example for the use of prob/2 is filtering goals based on their probability:

almost_always_false(G) :- prob(G, P), P < 0.00001.
% rest of program (omitted)

Note that in contrast to the previous example, this is a purely logical decision, that is, the success probability will be either 0 or 1 depending on the goal G. To summarize, using meta-calls to turn probabilities into usable objects in probabilistic logic programming is slightly different from the other probabilistic programming concepts considered in this paper: it requires a notion of encapsulation or hierarchical world structure and cannot be interpreted directly on the level of individual possible worlds for the entire program. Mantadelis and Janssens [2011] introduce MetaProbLog (available at http://people.cs.kuleuven.be/~theofrastos.mantadelis/tools/metaproblog.tar.gz; it also supports flexible probabilities, stochastic memoization, and negation as failure), a prototype implementation for ProbLog supporting nested meta-calls based on exact backward inference. As they discuss, meta-calls can be supported by any inference mechanism that can be suspended to perform inference for the query inside the meta-call. Such suspending is natural in backward reasoning, where the proof of a subgoal becomes a call to inference rather than a continuation of backward reasoning. With forward reasoning, such non-ground prob(goal,P) goals raise the same issues as other non-ground facts. Meta-calls of the form prob(goal,P) compute the grounding of P as the goal's probability, and using approximate inference to compute the latter will thus influence the grounding of such a fact, and therefore potentially also the consequences of this fact. This may affect the result of inference in unexpected ways, and it is thus unclear to what extent approximation approaches are suitable for meta-calls. Goodman et al. [2008] state that supporting meta-calls (or nested queries) in MCMC inference in Church is expected to be straightforward, but do not provide details. Meta-calls are not supported in AILog2, PRISM, ProbLog1, ProbLog2, cplint and PITA.

4.9 Time and Dynamics

Among the most popular probabilistic models are those that deal with dynamics and time, such as Hidden Markov Models (HMMs) and Dynamic Bayesian Networks. Dynamic models have received quite some attention within probabilistic logic programming. They can naturally be represented using logic programs through the addition of an extra "time" argument to each of the predicates. We illustrate this by giving two encodings of the Hidden Markov Model shown in Figure 2, where we restrict sequences to a given length (10 in the example).

[Figure 2: Example HMM with two states s0 and s1: initial distribution 0.5/0.5, transition probabilities 0.7 (s0 to s0), 0.3 (s0 to s1), 0.4 (s1 to s0), 0.6 (s1 to s1), and emission probabilities a: 0.2 / b: 0.8 in s0 and a: 0.9 / b: 0.1 in s1.]

Following Vennekens et al.
[2004], this model can be written as a set of annotated disjunctions:

length(10).
0.5 :: state(s0, 0); 0.5 :: state(s1, 0).
0.7 :: state(s0, T1); 0.3 :: state(s1, T1) :- state(s0, T), length(L), T < L, T1 is T + 1.
0.4 :: state(s0, T1); 0.6 :: state(s1, T1) :- state(s1, T), length(L), T < L, T1 is T + 1.
0.2 :: out(a, T); 0.8 :: out(b, T) :- state(s0, T).
0.9 :: out(a, T); 0.1 :: out(b, T) :- state(s1, T).

Alternatively, following Sato and Kameya [1997], but writing PRISM's multivalued switches as unconditional annotated disjunctions (in this example, the program structure causes the time argument to act as a unique identifier for different calls to the same AD, thus making memoized ADs and dememoized switches equivalent), the model can be written as follows:

0.2 :: output(s0, a, T); 0.8 :: output(s0, b, T).
0.9 :: output(s1, a, T); 0.1 :: output(s1, b, T).
0.5 :: init(s0); 0.5 :: init(s1).
0.7 :: trans(s0, s0, T); 0.3 :: trans(s0, s1, T).
0.4 :: trans(s1, s0, T); 0.6 :: trans(s1, s1, T).
length(10).
hmm(List) :- init(S), hmm(1, S, List).
% last time T:
hmm(T, S, [Obs]) :- length(T), output(S, Obs, T).
% earlier time T: output Obs in state S, transit from S to Next
hmm(T, S, [Obs|R]) :- length(L), T < L, output(S, Obs, T), trans(S, Next, T), T1 is T + 1, hmm(T1, Next, R).

Forward and backward sampling naturally deal with a time argument (provided time is bounded in the case of forward reasoning). Naively using such a time argument with exact inference results in exponential running times (in the number of time steps), though this can often be avoided using dynamic programming approaches and principles, as shown by the PRISM system, which achieves the same time complexity for HMMs as corresponding special-purpose algorithms [Sato and Kameya, 2001]. Other approaches that have devoted special attention to modeling and inference for dynamics include Logical HMMs [Kersting et al., 2006], a language for modeling HMMs with structured states, CPT-L [Thon et al., 2011], a dynamic version of CP-logic, and the work on a particle filter for dynamic distributional clauses [Nitti et al., 2013].

4.10 Generalized Labels

As we have seen in Section 3, computing success probabilities in probabilistic logic programming is closely related to evaluating the truth value of a logical formula. Weighted logic programming languages such as Dyna [Eisner et al., 2005] (Dyna is currently being extended into a more general language [Eisner and Filardo, 2011], but we consider the initial version here, as that one is more closely related to the probabilistic programming languages we discuss) and aProbLog [Kimmig et al., 2011b] take this observation a step further and replace probabilities (or Boolean truth values) by elements from an arbitrary semiring and corresponding combination operators. Here, a semiring is a structure (A, ⊕, ⊗, e⊕, e⊗), where addition ⊕ is an associative and commutative binary operation over the set A, multiplication ⊗ is an associative binary operation over the set A, ⊗ distributes over ⊕, e⊕ ∈ A is the neutral element of ⊕ (for all a ∈ A, a ⊕ e⊕ = a), e⊗ ∈ A is the neutral element of ⊗ (for all a ∈ A, a ⊗ e⊗ = a), and for all a ∈ A, e⊕ ⊗ a = a ⊗ e⊕ = e⊕; in a commutative semiring, ⊗ is commutative as well. More specifically, Dyna assigns labels to ground facts in a logic program and computes weights of atoms in the heads of clauses as follows: conjunction (,) in clause bodies is replaced by semiring multiplication ⊗, that is, the weight of a body is the ⊗-product of the weights of its atoms, and if multiple clauses share the same head atom, this atom's weight is the ⊕-sum of the corresponding bodies, that is, :- is replaced by semiring addition ⊕.

[Figure 3: Example graph illustrating generalized labels: Boolean case (left), shortest path (right); nodes a, b, c, d with edges labeled T in the Boolean case and with costs in the shortest-path case.]

We illustrate the idea with a logic program defining reachability in a directed graph adapted from Cohen et al. [2008]:

reachable(S) :- initial(S).
reachable(S) :- reachable(R), edge(R, S).
which in Dyna is interpreted as a system of (recursive) semiring equations

reachable(S) ⊕= initial(S).
reachable(S) ⊕= reachable(R) ⊗ edge(R, S).

To get the usual logic programming semantics, we can combine this program with facts labeled with values from the Boolean semiring (with ⊗ = ∧ and ⊕ = ∨):

initial(a) = T
edge(a, b) = T    edge(a, d) = T    edge(b, c) = T    edge(d, b) = T    edge(d, c) = T

which means that the weights of reachable atoms are computed as follows:

reachable(a) = initial(a) = T
reachable(d) = reachable(a) ∧ edge(a, d) = T
reachable(b) = reachable(a) ∧ edge(a, b) ∨ reachable(d) ∧ edge(d, b) = T
reachable(c) = reachable(b) ∧ edge(b, c) ∨ reachable(d) ∧ edge(d, c) = T

Alternatively, one can label facts with non-negative numbers denoting costs and use ⊗ = + and ⊕ = min to describe single-source shortest paths:

initial(a) = 0
edge(a, b) = 7    edge(a, d) = 5    edge(b, c) = 13    edge(d, b) = 4    edge(d, c) = 9

resulting in the evaluation

reachable(a) = initial(a) = 0
reachable(d) = reachable(a) + edge(a, d) = 5
reachable(b) = min(reachable(a) + edge(a, b), reachable(d) + edge(d, b)) = 7
reachable(c) = min(reachable(b) + edge(b, c), reachable(d) + edge(d, c)) = 14

That is, the values of reachable atoms now correspond to the length of the shortest path rather than the existence of a path. Given its origins in natural language processing, Dyna is closely related to PRISM in two aspects. First, it does not memoize labeled facts, but takes into account their weights each time they appear in a derivation, generalizing how each use of a rule in a probabilistic grammar contributes to a derivation. Second, again as in probabilistic grammars, it sums the weights of all derivations, but in contrast to PRISM or grammars does not require them to be mutually exclusive to do so. The inference algorithm of basic Dyna as given by Eisner et al. [2005] (an implementation is available at http://dyna.org/) computes weights by forward reasoning, keeping intermediate results in an agenda and updating them until a fixpoint is reached, though other execution strategies could be used as well, cf. [Eisner and Filardo, 2011]. Like Dyna, aProbLog [Kimmig et al., 2011b] replaces probabilistic facts by semiring-labeled facts, with the key difference that it bases the labels of derived facts on the labels of their models rather than those of their derivations, which requires semirings to be commutative. It thus directly generalizes the success probability (5) and the possible world DNF (6).
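To make the connection to success probabilities concrete, the following is a minimal sketch (not part of the original example; the probability values are arbitrary): if the facts of the example graph are labeled with elements of the probability semiring, with ⊗ = ×, ⊕ = +, and a negated fact carrying label 1 − p, then the aProbLog label of a reachable atom coincides with its ordinary success probability, i.e., the probability that a path to it exists in a randomly sampled subgraph.

0.9 :: initial(a).
0.7 :: edge(a, b).
0.5 :: edge(a, d).
0.6 :: edge(b, c).
0.4 :: edge(d, b).
0.9 :: edge(d, c).
reachable(S) :- initial(S).
reachable(S) :- reachable(R), edge(R, S).

Under this reading, the label of reachable(c) sums, over all possible worlds in which c is reachable, the product of the labels of the literals in that world, which is exactly the success probability (5).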
ProbLog inference algorithms based on BDDs or sd-DNNFs can be directly adapted to aProbLog (a prototype implementation of aProbLog is included in ProbLog1, cf. Footnote 5). Rather than replacing probabilities with semiring labels, one can also combine them with utilities or costs, and use the resulting language for decision making under uncertainty, as done in DTProbLog [Van den Broeck et al., 2010] (an implementation of DTProbLog is included in ProbLog1 and ProbLog2, cf. Footnotes 5 and 6).

5 Knowledge-Based Model Construction

So far, we have focused on probabilistic logic languages with strong roots in logic, where the key concepts of logic and probability are unified, that is, a random variable corresponds to a ground fact (or sometimes a ground term, as in distributional clauses), and standard logic programs are used to specify knowledge that can be derived from these facts. In this section, we discuss a second important group of probabilistic logic languages with strong roots in probabilistic graphical models, such as Bayesian or Markov networks. These formalisms typically use logic as a templating language for graphical models in relational domains, and thus take a quite different approach to combining logic and probabilities, also known as knowledge-based model construction (KBMC). Important representatives of this stream of research include PLPs [Haddawy, 1994], PRMs [Getoor et al., 2007], BLPs [Kersting and De Raedt, 2008], LBNs [Fierens et al., 2005], RBNs [Jaeger, 1997], CLP(BN) [Santos Costa et al., 2008], chain logic [Hommersom et al., 2009], Markov Logic [Richardson and Domingos, 2006], and PSL [Broecheler et al., 2010]. In the following, we relate the key concepts underlying the knowledge-based model construction approach to those discussed in the rest of this paper. We again focus on languages based on logic programming, such as PLPs, BLPs, LBNs, chain logic, and CLP(BN), but mostly abstract from the specific language. These representation languages are typically designed so that implication in logic (":-") corresponds to the direct influence relation in Bayesian networks. The logical knowledge base is then used to construct a Bayesian network. So inference proceeds in two steps: the logical step, in which one constructs the network, and the probabilistic step, in which one performs probabilistic inference on the resulting network. We first discuss modeling Bayesian networks and their relational counterpart in the context of the distribution semantics, and then focus on CLP(BN) as an example of a KBMC approach whose primitives clearly expose the separation between model construction via logic programming and probabilistic inference on the propositional model.

5.1 Bayesian Networks and Conditional Probability Tables

A Bayesian network (BN) defines a joint probability distribution over a set of random variables V = {V1, . . . , Vm} by factoring it into a product of conditional probability distributions, one for each variable Vi given its parents par(Vi) ⊆ V. The parent relation is given by an acyclic directed graph (cf. Figure 4), where the random variables are the nodes and an edge Vi → Vj indicates that Vi is a parent of Vj. The conditional probability distributions are typically specified as conditional probability tables (CPTs), which form the key probabilistic concept of BNs.
For instance, the CPT on the left of Figure 4 specifies that the random variable sprinkler takes value true with probability 0.1 (and false with 0.9) if its parent cloudy is true, and with probability 0.5 if cloudy is false. Formally, a CPT contains a row for each possible assignment x1, . . . , xn to the parent variables X1, . . . , Xn specifying the distribution P(X|x1, . . . , xn). As has been shown earlier, e.g., by Poole [1993] and Vennekens et al. [2004], any Bayesian network can be modeled in languages based on the distribution semantics by representing every row in a CPT as an annotated disjunction

p1 :: X(w1); ... ; pk :: X(wk) :- X1(v1), ..., Xn(vn)

where X(v) is true when v is the value of X. The body of this AD is true if the parent nodes have the values specified in the corresponding row of the CPT, in which case the AD chooses a value for the child from the corresponding distribution. As an example, consider the sprinkler network shown in Figure 4. The CPT for the root node cloudy corresponds to an AD with empty body

0.5 :: cloudy(t); 0.5 :: cloudy(f).

whereas the CPTs for sprinkler and rain require the state of their parent node cloudy to be present in the body of the ADs

0.1 :: sprinkler(t); 0.9 :: sprinkler(f) :- cloudy(t).
0.5 :: sprinkler(t); 0.5 :: sprinkler(f) :- cloudy(f).
0.8 :: rain(t); 0.2 :: rain(f) :- cloudy(t).
0.2 :: rain(t); 0.8 :: rain(f) :- cloudy(f).

The translation for the CPT of grass wet is analogous.

[Figure 4: The sprinkler network is a Bayesian network modeling an environment where both the sprinkler and the rain can cause the grass getting wet [Russell and Norvig, 2003]. P(cloudy=t) = 0.5; sprinkler and rain each depend on cloudy as given above; grass_wet depends on sprinkler and rain with P(grass_wet=t) = 0.99, 0.9, 0.9, 0.0 for sprinkler/rain = t/t, t/f, f/t, f/f.]

5.2 Relational Dependencies

Statistical relational learning formalisms such as BLPs, PLPs, LBNs and CLP(BN) essentially replace the specific random variables in the CPTs of Bayesian networks by logically defined random variable templates, commonly referred to as parameterized random variables or par-RVs for short [Poole, 2003], though the actual syntax amongst these systems differs significantly. We here use annotated disjunctions to illustrate the key idea. For instance, in a propositional setting, the following annotated disjunctions express that a specific student's grade in a specific course probabilistically depends on whether he has read the corresponding textbook or not:

0.6 :: grade(high); 0.4 :: grade(low) :- reads(true).
0.1 :: grade(high); 0.9 :: grade(low) :- reads(false).

Using logical variables, this dependency can directly be expressed for many students, courses, and books:

0.6 :: grade(S, C, high); 0.4 :: grade(S, C, low) :- book(C, B), reads(S, B).
0.1 :: grade(S, C, high); 0.9 :: grade(S, C, low) :- book(C, B), not(reads(S, B)).

More concretely, the annotated disjunctions express that P(grade(S, C) = high) = 0.6 if the student has read the book of the course and P(grade(S, C) = high) = 0.1 otherwise. Thus the predicate grade depends on book/2 and reads/2. The dependency holds for all instantiations of the rule, that is, it acts as a template for all persons, courses, and books. This is what knowledge-based model construction approaches all share: the logic acts as a template to generate dependencies (here CPTs) in the graphical model.
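To make the templating view concrete, the following is a minimal grounding sketch (the student, course, and book names are hypothetical and not part of the original example). Given the facts book(ai, aima) and reads(ann, aima), the template above yields, for the substitution S = ann, C = ai, B = aima, the ground dependency

0.6 :: grade(ann, ai, high); 0.4 :: grade(ann, ai, low) :- book(ai, aima), reads(ann, aima).
0.1 :: grade(ann, ai, high); 0.9 :: grade(ann, ai, low) :- book(ai, aima), not(reads(ann, aima)).

and one such pair of ground clauses for every other grounding of S, C, and B, that is, one instantiated CPT row per combination of student, course, and book.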
This also introduces a complication that is not encountered in propositional Bayesian networks or their translation to annotated disjunctions. To illustrate this, let us assume the predicate book/2 is deterministic and known. Then the propositional case arises when for each course there is exactly one book. The annotated disjunctions then e\ufb00ectively encode the conditional probability table P(Grade|Reads). However, 28 \fif there are multiple books, say two, for one course, then the above template would specify two CPTs: one for the \ufb01rst book, P(Grade|Reads1), and one for the second, P(Grade|Reads2). In Bayesian networks, these CPTs need to be combined and there are essentially two ways for realizing this. The \ufb01rst is to use a so-called combining rule, that is, a function that maps these CPTs into a single CPT of the form P(Grade|Reads1, Reads2). The most popular combining rule is noisy-or, for which P(Grade = high|Reads1, ..., Readsn) = 1 \u2212Qn i=1(1 \u2212P(Grade = high|Readsi = true)) where n is the number of books for the course. Using annotated disjunctions, this combining rule is obtained automatically, cf. Section 2.2. In the statistical relational learning literature, this approach is followed for instance in RBNs and BLPs, and several other combining rules exist, cf., e.g., [Jaeger, 1997, Kersting and De Raedt, 2008, Natarajan et al., 2005]. While combining rules are an important concept in KBMC, using them in their general form under the distribution semantics requires one to change the underlying logic, which is non-trivial. Hommersom and Lucas [2011] introduce an approach that models these interactions by combining the distribution semantics with default logic. Alternatively, one could use meta-calls, cf. Section 4.8. The second way of dealing with the two distributions uses aggregation. In this way, the random variable upon which one conditions grade is the number of books the person read, rather than the reading of the individual books. This approach is taken for instance in PRMs and CLP(BN). In the context of the distribution semantics, aggregation can be realized within the logic program using second order predicates, cf. Section 4.7. For instance, the following program makes a distinction between reading more than two, two, one, or none of the books: 0.9 :: grade(S, C, high); 0.1 :: grade(S, C, low) :\u2212nofbooksread(S, C, N), N > 2. 0.8 :: grade(S, C, high); 0.2 :: grade(S, C, low) :\u2212nofbooksread(S, C, 2). 0.6 :: grade(S, C, high); 0.4 :: grade(S, C, low) :\u2212nofbooksread(S, C, 1). 0.1 :: grade(S, C, high); 0.9 :: grade(S, C, low) :\u2212nofbooksread(S, C, 0). nofbooksread(S, C, N) :\u2212findall(B, (book(C, B), reads(S, B)), List), length(List, N). 5.3 Example: CLP(BN) We now discuss CLP(BN) [Santos Costa et al., 2008] in more detail, as it clearly exposes the separation between model construction via logic programming and probabilistic inference on the propositional model in KBMC. CLP(BN) is embedded in Prolog21 and uses constraint programming principles to construct a Bayesian network via logical inference. Syntactically, CLP(BN) extends logic programming with constraint atoms that (a) de\ufb01ne random variables together with their CPTs and (b) establish constraints linking these random variables to logical variables used in the logic program. The answer to a query in CLP(BN) is the marginal distribution of the query variables, conditioned on evidence if available. 
The \ufb01rst phase of inference in CLP(BN) uses backward reasoning in the logic program to collect all relevant constraints in a constraint store, the 21implementation included in YAP Prolog, http://www.dcc.fc.up.pt/~vsc/Yap/ 29 \fsecond phase computes the marginals in the Bayesian network de\ufb01ned by these constraints. Conditioning on evidence is straightforward, as it only requires to add the corresponding constraints to the store.22 Speci\ufb01cally, a CLP(BN) clause (in canonical form) is either a standard Prolog clause, or has the following structure: h(A1, . . . , An, V) :\u2212body, {V = sk(C1, . . . , Ct) with CPT}. Here, body is a possibly empty conjunction of logical atoms, and the part in curly braces is a constraint atom. sk(C1, . . . , Ct) is a Skolem term not occurring in any other clause of the program (whose arguments Ci are given via the input variables Aj and the logical body), and CPT is a term of the form p(Values, Table, Parents), where Values is a list of possible values for sk(C1, . . . , Ct), Parents is a list of logical variables specifying the parent nodes, and Table the probability table given as a list of probabilities, where the order of entries corresponds to the valuations obtained by backtracking over the parents\u2019 values in the order given in the corresponding de\ufb01nitions. This CPT term can be given either directly or via the use of logical variables and uni\ufb01cation. We \ufb01rst illustrate this for the propositional case, using the following model23 of the sprinkler Bayesian network as given in Figure 4:24 cloudy(C) :{ C = cloudy with p([f,t],[0.5,0.5],[]) }. sprinkler(S) :cloudy(C), % C = f , t { S = sprinkler with p([f,t], [0.5,0.9, % S = f 0.5,0.1], % S = t [C]) }. rain(R) :cloudy(C), % C = f , t { R = rain with p([f,t], [0.8,0.2, % R = f 0.2,0.8], % R = t [C]) }. wet_grass(W) :sprinkler(S), rain(R), { W = wet with p([f,t], /* S/R = f/f, f/t, t/f, t/t */ [1.0, 0.1, 0.1, 0.01, % W = f 0.0, 0.9, 0.9, 0.99], % W = t [S,R]) }. 22The implementation adds evidence declared in the input program to the store at compile time. 23taken from the examples in the CLP(BN) system 24We include comments for better readability, as CLP(BN) swaps rows and columns of CPTs compared to the notation in Figure 4. 30 \fIn the clause for the top node cloudy, the body consists of a single constraint atom that constrains the logical variable C to the value of the random variable cloudy. This random variable takes values f or t with probability 0.5 each, and has an empty parent list. Note that within constraint atoms, the = sign does not denote Prolog uni\ufb01cation, but an equality constraint between a logical variable and the value of a random variable. The clause for sprinkler calls cloudy(C), thus setting up a constraint between C and the cloudy random variable, and then uses C as the only parent of the random variable sprinkler it de\ufb01nes. The \ufb01rst column of the CPT corresponds to the \ufb01rst parent value, the \ufb01rst row to the \ufb01rst child value, and so on, i.e., in case of cloudy=f, the probability of sprinkler=f is 0.5, whereas for cloudy=t, it is 0.9. The other two random variables are de\ufb01ned analogously, with their clauses again \ufb01rst calling the predicates for the parent variables to include the corresponding constraints. 
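As a usage sketch (the exact interface and output format depend on the YAP implementation of CLP(BN)), querying the program above for the grass node without evidence should return its prior marginal, which can be checked by hand from the CPTs:

?- wet_grass(W).
% expected marginal: P(wet = t) = 0.5 · 0.7452 + 0.5 · 0.549 = 0.6471, P(wet = f) = 0.3529

Here 0.7452 and 0.549 are the probabilities of wet = t given cloudy = t and cloudy = f, respectively, obtained by summing over the values of sprinkler and rain.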
To answer the query sprinkler(S), which asks for the marginal of the random variable sprinkler, CLP(BN) performs backward reasoning to \ufb01nd all constraints in the proof of the query, and thus the part of the Bayesian network relevant to compute the marginal. This \ufb01rst calls cloudy(C), adding the constraint C=cloudy to the store (and thus the cloudy node to the BN), and then adds the constraint S=sprinkler to the store, and the sprinkler node with parent cloudy to the BN. When de\ufb01ning relational models, random variables can be parameterized by logical variables as in the following clause from the school example included in the implementation: registration_grade(R, Grade) :registration(R, C, S), course_difficulty(C, Dif), student_intelligence(S, Int), grade_table(Int, Dif, Table), { Grade = grade(R) with Table }. grade_table(I, D, p([a,b,c,d], /* I,D = h h h m h l m h m m m l l h l m l l */ [ 0.2, 0.7, 0.85, 0.1, 0.2, 0.5, 0.01, 0.05,0.1 , %a 0.6, 0.25, 0.12, 0.3, 0.6,0.35,0.04, 0.15, 0.4 , %b 0.15,0.04, 0.02, 0.4,0.15,0.12, 0.5, 0.6, 0.4, %c 0.05,0.01, 0.01, 0.2,0.05,0.03, 0.45, 0.2, 0.1 ],%d [I,D])). Here, registration/3 is a purely logical predicate linking a registration R to a course C and a student S. The predicates course difficulty and student intelligence de\ufb01ne distributions over possible values h(igh), m(edium), and l(ow) for the di\ufb03culty Dif of course C and the intelligence Int of student S, respectively. For each grounding r of the variable R in the database of registrations, this clause de\ufb01nes a random variable grade(r) with values a, b, c and d that depends on the di\ufb03culty of the corresponding course and the intelligence of the corresponding student. In this case, the CPT itself is not de\ufb01ned within the constraint atom, but provided by a Prolog predicate binding it to a logical variable. De\ufb01ning aggregation using second order predicates is straightforward in CLP(BN), as random variables and constraints are part of the object level 31 \fvocabulary. For instance, the following clause de\ufb01nes the performance level of a student based on the average of his grades: student_level(S,L) :findall(G,(registration(R,_,S),registration_grade(R,G)),Grades), avg_grade(Grades,Avg), level_table(T), { L = level(S) with p([h,m,l],T,[Avg])}. Here, avg_grade/2 sets up a new random variable whose value is the suitably de\ufb01ned average of the grade list Grades (and which thus has a deterministic CPT) and constrains Avg to that random variable, and level_table provides the list of CPT entries specifying how the level depends on this average. We refer to Santos Costa et al. [2008] for a discussion of the inference challenges aggregates raise. Despite the di\ufb00erences in syntax, probabilistic primitives, and inference between CLP(BN) and probabilistic extensions of Prolog following the distribution semantics, there are also many commonalities between those. As we discussed above, conditional probability tables can be represented using annotated disjunctions, and it is thus possible to transform CLP(BN) clauses into Prolog programs using annotated disjunctions. On the other hand, Santos Costa and Paes [2009] discuss the relation between PRISM and CLP(BN) based on a number of PRISM programs that they map into CLP(BN) programs. 
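As an illustration of this correspondence (a sketch, not the actual mapping used in the cited work), the registration_grade clause above can be rewritten as one annotated disjunction per column of grade_table; for instance, the column for high intelligence and high difficulty becomes

0.2 :: grade(R, a); 0.6 :: grade(R, b); 0.15 :: grade(R, c); 0.05 :: grade(R, d) :-
    registration(R, C, S), student_intelligence(S, h), course_difficulty(C, h).

with analogous clauses for the remaining eight intelligence/difficulty combinations, assuming student_intelligence/2 and course_difficulty/2 are themselves defined by (probabilistic) clauses over the values h, m, and l.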
6 Probabilistic Programming Concepts and Inference

We round off this survey by summarizing the relations between the dimensions of SUCC inference as discussed in Section 3 and the probabilistic programming concepts identified in Section 4. On the probabilistic side, we focus on exact inference versus sampling, as conclusions for exact inference carry over to approximate inference with bounds in most cases. On the logical side, we focus on forward versus backward reasoning, as conclusions for backward reasoning carry over to the approach using weighted model counting. We provide an overview in Table 2, where we omit the concepts unknown objects, as those are typically simulated via flexible probabilities and/or continuous distributions, and constraints, as those have not yet been considered during inference. For generalized labels, we focus on aProbLog, as it is closer to the distribution semantics than Dyna, due to its semantics based on worlds rather than derivations. We do not include MCMC here, as existing MCMC approaches in the context of the distribution semantics are limited to the basic case of definite clause programs without additional concepts.

Table 2: Relation between key probabilistic programming concepts and main dimensions of inference; see Section 6 for details. (* number of proofs/worlds exponential in length of answer list or time sequence)

Concept                          forward exact   backward exact     forward sampling   backward sampling
Flexible Probabilities           no              yes                no                 yes
Continuous Distributions         no              limited            yes                yes
Stochastic Memoization           with            with or without    with               with or without
Negation as Failure              yes             yes                yes                yes
2nd Order Predicates             yes*            yes*               yes                yes
Meta-Calls                       no              yes                no                 yes
Time and Dynamics                yes*            yes*               yes                yes
Generalized Labels (aProbLog)    yes             yes                n.a.               n.a.

Dimensions of inference: The main difference between exact inference and sampling is that the former has to consider all possible worlds or all proofs of the query, whereas the latter always considers one possible world or proof in isolation. As second order predicates and time and dynamics can increase the number of proofs exponentially (in the length of the answer list or the number of time steps), they are more easily handled by sampling based approaches, though tabling can significantly improve performance of exact inference in dynamic domains. Sampling based approaches do not directly apply for generalized labels, as sampling exploits the probabilistic semantics of fact labels. The main difference between forward and backward reasoning is that the former generates all consequences of the probabilistic logic program, whereas the latter is query-driven and only considers relevant consequences, which can drastically improve efficiency. This difference is well-known in logic programming, and becomes even more important in the probabilistic setting, where we are interested in not just a single world or proof, but in all possible worlds or all proofs. The fact that backward reasoning is query-driven makes it well-suited for flexible probabilities and meta-calls, which cannot directly be handled in forward reasoning. The reason is that the corresponding subgoals have an infinite number of groundings, among which backward reasoning easily picks the relevant ones, which forward reasoning cannot do.
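As a brief illustration of this point (a sketch; the predicate names are hypothetical), consider a probabilistic fact whose label is only computed at query time, in the spirit of the fact P :: p(P) used in the meta-call example of Section 4.8:

P :: pack(Item) :- weight(Item, Weight), P is 1.0 / Weight.
weight(skis, 6).

A backward query such as pack(skis) binds Weight, and hence the probability P = 1/6, before the probabilistic fact is used, so only the single relevant ground instance is ever constructed. Forward reasoning would instead have to enumerate the infinitely many possible groundings of P up front, which is why Table 2 lists flexible probabilities as unsupported for the forward approaches.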
The same e\ufb00ect makes it necessary to use stochastic memoization in forward reasoning, while backward reasoning can support dememoization (as in PRISM) as well as memoization (as in the various ICL, ProbLog and LPAD systems). The roots of the distribution semantics in logic programming become apparent when considering inference for the two remaining key concepts, negation as failure and continuous distributions as provided by distributional clauses. While the logic concept of negation as failure is naturally supported in all combinations of exact inference or sampling and forward or backward reasoning, the probabilistic concept of continuous distributions is much more challenging, and only practical in sampling-based approaches. 33 \fInference approaches: More speci\ufb01cally, exact inference using forward reasoning in the form discussed in Section 3.1 can be used for all programs with \ufb01nitely many \ufb01nite worlds, which (a) excludes the use of non-ground facts without explicitly given domains, \ufb02exible probabilities, meta-calls and continuous probabilities, and (b) requires stochastic memoization. As this approach additionally su\ufb00ers from having to enumerate all possible worlds, it is not used in practice.25 Exact inference using backward reasoning is the most widely supported inference technique in probabilistic logic programming, provided by AILog2, PRISM, ProbLog1, cplint, PITA and MetaProbLog. PRISM never uses stochastic memoization, whereas the other systems always use it. Only very limited forms of continuous distributions can be supported, cf. the work on Hybrid ProbLog [Gutmann et al., 2010]. All other concepts can be supported, but implementations di\ufb00er in the ones they cover. Negation as failure is supported in all implementations. In addition, AILog2 and cplint support \ufb02exible probabilities, MetaProbLog supports \ufb02exible probabilities and meta-calls, and ProbLog1 supports \ufb02exible probabilities, limited use of continuous distributions (Hybrid ProbLog) and generalized labels (aProbLog). Approximate inference with bounds using backward reasoning is available in ProbLog1 and cplint, but restricted to de\ufb01nite clause programs, as the use of negation as failure complicates proof \ufb01nding (as discussed in Section 4.6). As the WMC approach as implemented in ProbLog2 uses backward inference to determine the relevant grounding, that is, the groundings of clauses that appear in some proof of a query, the same observations as for exact backward inference apply in this case as well. ProbLog2 supports \ufb02exible probabilities and negation as failure. Forward sampling in its simplest form as discussed in Section 3.1 can be used with programs whose worlds are all \ufb01nite, which excludes the use of non-ground facts without explicitly given domains, \ufb02exible probabilities, and meta-calls, and requires stochastic memoization. In contrast to exact forward inference, forward sampling does support continuous distributions, as only one value is considered at a time. None of the probabilistic logic programming systems discussed here implement forward sampling. Backward sampling is the most \ufb02exible approach and can in principle deal with all concepts except generalized labels. Backward sampling approaches are provided by ProbLog1 and cplint, which both support \ufb02exible probabilities and negation as failure. PRISM has a builtin for sampling the outcome of a query using backward reasoning, but does not use it for probability estimation. 
7" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file