{ "url": "http://arxiv.org/abs/2404.16461v2", "title": "Large Language Models Perform on Par with Experts Identifying Mental Health Factors in Adolescent Online Forums", "abstract": "Mental health in children and adolescents has been steadily deteriorating\nover the past few years. The recent advent of Large Language Models (LLMs)\noffers much hope for cost and time efficient scaling of monitoring and\nintervention, yet despite specifically prevalent issues such as school bullying\nand eating disorders, previous studies on have not investigated performance in\nthis domain or for open information extraction where the set of answers is not\npredetermined. We create a new dataset of Reddit posts from adolescents aged\n12-19 annotated by expert psychiatrists for the following categories: TRAUMA,\nPRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and TREATMENT and compare expert\nlabels to annotations from two top performing LLMs (GPT3.5 and GPT4). In\naddition, we create two synthetic datasets to assess whether LLMs perform\nbetter when annotating data as they generate it. We find GPT4 to be on par with\nhuman inter-annotator agreement and performance on synthetic data to be\nsubstantially higher, however we find the model still occasionally errs on\nissues of negation and factuality and higher performance on synthetic data is\ndriven by greater complexity of real data rather than inherent advantage.", "authors": "Isabelle Lorge, Dan W. Joyce, Andrey Kormilitzin", "published": "2024-04-25", "updated": "2024-04-26", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "The recent development of powerful Large Language Models such as GPT3.5 [2] and GPT4 [3] able to perform tasks in a zero-shot manner (i.e., without having been specifically trained or fine- tuned to do so) by being simply prompted with natural language instructions shows much promise for healthcare applications and the domain of mental health. Indeed, these models display more impressive general natural language processing abilities than their predecessors and excel at tasks such as Question Answering and Named Entity Recognition [4, 5, 6, 7]. Models with the ability to process social media content for indicators of mental health issues have the potential to become invaluable cost-effective tools for applications such as public health monitoring [8] and online moderation or intervention systems [9]. In addition, synthetic data produced by LLMs can be a cost effective and privacy-preserving tool for training task specific models [10]. There have been several studies aimed at assessing the abilities of LLMs to perform a range of tasks related to mental health on datasets derived from social media. Yang et al. [11] conducted a comprehensive assessment of ChatGPT (gpt-3.5-turbo), InstructGPT3 and LlaMA7B and 13B [12] arXiv:2404.16461v2 [cs.CL] 26 Apr 2024 on 11 different datasets and 5 tasks (mental health condition binary/multiclass detection, cause/factor detection, emotion detection and causal emotion entailment, i.e. determining the cause of a described emotion). They find that while the LLMs perform well (0.46-0.86 F1 depending on task), with ChatGPT substantially outperforming both LLaMA 7B and 13B, they still underperform smaller models specifically fine-tuned for each task (e.g., RoBERTa). Xu et al. [13] find similar results for Alpaca [14], FLAN-T5 [15] and LLaMA2 [16], with only fine-tuned LLMs able to perform on par with smaller, task-specific models such as RoBERTa [17, 18]. 
However, we find that previous studies suffer from the following shortcomings: 1. They focus on adult mental health 2. They focus on tasks with a closed (or finite) set of answers, where the model is asked to perform each task in turn 3. They do not investigate how LLMs perform on synthetic data, i.e., text they are asked to simultaneously generate and label There is growing consensus that we are facing a child mental health crisis [1]. Before the COVID-19 pandemic there was already increasing incidence of mental health conditions in children and young people (CYP), such as depression, anxiety and eating disorders [19] as well as rising rates of self-harm and suicidal ideation [20] and cyberbullying strongly linked to adverse mental health outcomes [21]. The advent of the pandemic accelerated this already precarious situation and created additional challenges [22, 23] such as discontinuity of healthcare service provision in addition to interruption to young people\u2019s usual engagement in education and their social lives. This age range is particularly vulnerable to onset of mental health issues, with half of conditions appearing by early adolescence and 10-20% of children and young people experiencing at least one mental health condition [24]. Females, those with low socioeconomic backgrounds, trauma, abuse or having witnessed violence [25] are at heightened risk. On the other hand, social media now forms an important part of children and adolescents\u2019 daily lives, whose impact on mental health is debated, with potential benefits (stress reduction and support networks [26]) as well as potential risks (sleep disturbance, self esteem issues and cyberbullying [27]). Regardless of their detrimental or protective impact, social media may contribute valuable insights into CYP\u2019s mental health, with opportunities for monitoring and intervention, for example identifying those at risk of depression and mood disorders [28]. Given the mental health of CYP is a particularly pressing public health concern, we wished to investigate how LLMs perform on extracting mental health factors when faced with social media content generated by young people aged 12-19. Indeed, several issues related to mental health either exclusively apply to children and adolescents (such as school bullying and ongoing family abuse) or are particularly prevalent in this age range (such as eating disorders [29] and self-harm [30]), making both content type and factors of interest distinct from those found in adult social media posts. In addition, previous studies focused on tasks which had either a binary or closed sets of answers (e.g., choosing between several given conditions or between several given causal factors). In contrast, we wish to examine how LLMs perform on a task of open information extraction, where they are given categories of information and asked to extract any which are found in the text (e.g., asked to detect whether there is any mental health condition indicated in the text). Furthermore, in previous studies the models were tested with each task in turn (e.g., asked to detect depression in one dataset, then detect suicidality in another dataset), whereas we gather and annotate our own dataset in order to be able to ask the LLMs to extract all categories simultaneously (e.g, extract all conditions and symptoms in a given sentence). 
Finally, to our knowledge there has been no investigation of how LLMs perform when asked to annotate text as they generate it, i.e., how their performance on synthetic data compares with their performance on real data. There is growing interest in synthetic data for healthcare [31]. Given the potential for training models and running simulations and digital twin experiments with the benefit of reduced issues of data scarcity and privacy, we believe that our work will contribute to a better understanding of the limitations and benefits of using synthetic data for real-world tasks.", "main_content": "In summary, we aim to: 1. Generate and annotate with high-quality expert annotations a novel dataset of social media posts which allows extraction of a wide range of mental health factors simultaneously. 2. Investigate performance of two top-performing LLMs (GPT3.5 and GPT4) on extracting mental health factors in adolescent social media posts to verify whether they can be on par with expert annotators. 3. Investigate how these LLMs perform on synthetic data, i.e., when asked to annotate text as they generate it, with the aim of assessing the utility of these data for training task-specific models. 3 Method 3.1 Reddit dataset We use Python\u2019s PRAW library to collect posts from the Reddit website (www.reddit.com) over the last year, including posts from specific forum subthemes (\u2018subreddits\u2019) dedicated to mental health topics: r/anxiety, r/depression, r/mentalhealth, r/bipolarreddit, r/bipolar, r/BPD, r/schizophrenia, r/PTSD, r/autism, r/traumatoolbox, r/socialanxiety, r/dbtselfhelp, r/offmychest and r/mmfb. The distribution of subreddits in the dataset can be found in Figure 1. As in previous works [32], we use heuristics to obtain posts from our target age range (e.g., posts containing expressions such as I am 16/just turned 16/etc.). We gather 1000 posts written by 950 unique users. To optimise the annotation process, we select the most relevant sentences to be annotated by embedding a set of mental health keywords with Python\u2019s sentence-transformers library [33], calculating the cosine similarity with post sentences and choosing a threshold of 0.2 cosine similarity after trial and error. We keep the post index for each sentence to provide context. The resulting dataset contains 6500 sentences (an illustrative code sketch of this collection and selection step is given below). 3.2 Ethical considerations In conducting this research, we recognised the importance of respecting the autonomy and privacy of the Reddit users whose posts were included in our dataset. While Reddit data is publicly available and was obtained from open online forums, we acknowledge that users may not have anticipated their contributions being used for research purposes and will therefore make the data available only on demand. The verbatim example sentences given in later sections have been modified to prevent full-text search strategies from being used to infer the post author\u2019s identity on Reddit. To protect the confidentiality of participants, we did not provide usernames or other identifying information to our annotators. Annotators were psychiatrists who were warned that the content of the posts was highly sensitive, with potentially triggering topics such as self-harm and child abuse. Reddit\u2019s data sharing and research policy allows academic researchers to access certain Reddit data for the purposes of research, subject to the platform\u2019s terms and conditions. They require researchers to obtain approval through their data access request process before using the API. 
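As an illustration of the collection and sentence-selection pipeline of Section 3.1, a minimal sketch follows. The PRAW credentials, the age-range regular expression, the keyword list and the all-MiniLM-L6-v2 encoder are assumptions made for the sketch; only the subreddit names, the 1000-post scale and the 0.2 cosine-similarity threshold come from the paper.

# Minimal sketch of the Reddit collection and keyword-similarity sentence
# selection described in Section 3.1 (credentials, keywords and encoder model
# are illustrative assumptions, not the authors' exact choices).
import re
import praw
from sentence_transformers import SentenceTransformer, util

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="cyp-mh-study")
subreddits = "anxiety+depression+mentalhealth+bipolarreddit+bipolar+BPD"  # etc.

# Heuristic for the 12-19 age range, e.g. "I am 16" or "just turned 14"
age_pattern = re.compile(r"\b(i am|i'm|just turned)\s+1[2-9]\b", re.IGNORECASE)

posts = [s.selftext for s in reddit.subreddit(subreddits).new(limit=1000)
         if age_pattern.search(s.selftext)]

# Keyword-similarity sentence selection with a 0.2 cosine threshold (as in the paper);
# naive full-stop splitting stands in for proper sentence segmentation.
encoder = SentenceTransformer("all-MiniLM-L6-v2")                   # assumed encoder
keywords = ["depression", "anxiety", "self-harm", "panic attack"]   # illustrative keywords
kw_emb = encoder.encode(keywords, convert_to_tensor=True)

selected = []
for post_idx, post in enumerate(posts):
    for sentence in post.split("."):
        if not sentence.strip():
            continue
        sent_emb = encoder.encode(sentence, convert_to_tensor=True)
        if util.cos_sim(sent_emb, kw_emb).max().item() > 0.2:
            selected.append((post_idx, sentence.strip()))           # keep post index for context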
The policy outlines requirements around protecting user privacy, obtaining consent, and properly attributing the data source in any published work. They reserve the right to deny data access requests or revoke access if the research is deemed to violate Reddit\u2019s policies. Researchers must also agree to Reddit\u2019s standard data use agreement when accessing the data. Our research aims to contribute to the understanding of mental health discourse from adolescents on social media platforms. We believe the potential benefits of this work, in terms of insights that could improve mental health support and resources, outweigh the minimal risks to participants. However, we remain aware of the ethical complexities involved in using public social media data, and encourage further discussion and guidance in this emerging area of study. 3.3 Synthetic dataset In addition to the real dataset, we generate two synthetic datasets of 500 sentences each by prompting GPT3.5 (gpt-3.5-turbo-0125) and GPT4 (gpt-4-0125-preview) to create and label Reddit-like posts of 5 sentences (temperature 0, all other parameters set to default). The instructions given were made as similar as possible to those given to annotators, and the model was explicitly told to label only factors which applied to the author of the post (e.g., not to label My friend has depression with CONDITION). The prompt used can be found in Appendix A. Figure 1: Distribution of subreddits 3.4 Annotation schema Given that our goal is to obtain a wide range of relevant annotations for each sentence in order to test the LLMs\u2019 ability to generalise and perform open information extraction, and given the previously mentioned important factors related to trauma [34] and precarity [35], we create the following six categories in consultation with a clinical psychiatrist: \u2022 TRAUMA (sexual abuse, physical abuse, emotional abuse, school bullying, death, accident, etc.) \u2022 PRECARITY (socioeconomic, parental conflict, parental illness, etc.) \u2022 SYMPTOM (self-harm, low self-esteem, anhedonia, panic attack, flashback, psychosis, insomnia, etc.) \u2022 CONDITION (eating disorder, depression, bipolar, bpd, anxiety, ptsd, adhd, substance abuse/addiction, etc.) \u2022 SUICIDALITY (no subcategories) \u2022 TREATMENT (no subcategories) Nineteen expert annotators were contacted and asked to annotate 500 sentences each for a fixed compensation of \u00a3120 (\u2248\u00a360/hour). These were UK-trained psychiatrists, all of whom had obtained Membership of the Royal College of Psychiatrists by post-graduate experience and formal examinations. Thirteen annotators annotated the Reddit dataset, two annotators annotated the synthetic datasets and four annotators re-annotated samples from the Reddit and synthetic datasets for inter-annotator agreement computation (100 sentences from each dataset, 1500 sentences in total). Annotators were given the above subcategory examples but allowed to use new subcategories when appropriate (no closed set of answers). They were given the post indices to provide context (i.e., so as to be aware which sentences belonged to the same post). They were asked to annotate only school bullying as bullying, and other instances (e.g., sibling harassment) as emotional abuse. Anxiety was to be annotated as a symptom rather than a condition unless specifically described as a disorder. Experts performed the annotation by filling in the relevant columns in an Excel sheet with each sentence as a row. 
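Returning to the synthetic datasets of Section 3.3, the generation call can be sketched as follows. The model identifiers and temperature 0 are those reported above; the prompt shown here is only a placeholder for the actual prompt in Appendix A, and parsing of the returned labels is omitted.

# Minimal sketch of the synthetic-data generation described in Section 3.3,
# using the openai Python client. The prompt below is a placeholder, not the
# Appendix A prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PLACEHOLDER_PROMPT = (
    "Write a Reddit-like post of 5 sentences by an adolescent and label each "
    "sentence for TRAUMA, PRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and "
    "TREATMENT, annotating only factors that apply to the author of the post."
)

def generate_post(model: str) -> str:
    response = client.chat.completions.create(
        model=model,               # "gpt-3.5-turbo-0125" or "gpt-4-0125-preview"
        temperature=0,             # as reported; other parameters left at default
        messages=[{"role": "user", "content": PLACEHOLDER_PROMPT}],
    )
    return response.choices[0].message.content

# 100 posts of 5 sentences each gives the 500 sentences per synthetic dataset
synthetic_gpt35 = [generate_post("gpt-3.5-turbo-0125") for _ in range(100)]
synthetic_gpt4 = [generate_post("gpt-4-0125-preview") for _ in range(100)]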
Importantly, given the known limitations of language models with negation [36], we wished to annotate both POSITIVE and NEGATIVE evidence in order to test LLMs\u2019 ability to handle both polarities (e.g., I am not feeling suicidal as negative suicidality or We don\u2019t have any money issues as negative socioeconomic precarity). For this purpose, annotators were asked to use the prefixes P and N (e.g., P(adhd) in the CONDITION column or N(socioeconomic) in the PRECARITY column). 3.5 Data processing and dataset statistics In order to compare expert annotations with LLM annotations despite the wide variety of subcategories and terms used by annotators, we create dictionaries mapping each term found in the dataset to a standard equivalent (e.g., p(emotional) to p(emotional abuse), p(physical violence) to p(physical abuse), p(gun violence) and p(school shooting) to p(violence), p(rape) to p(sexual abuse), p(financial burden) and p(poor) to p(socioeconomic precarity), p(divorce) to p(family conflict), p(self hatred) to p(low self esteem), etc.). Parental substance abuse is considered family illness and any underspecified subcategories are marked as \u2018unspecified\u2019 (e.g., p(trauma unspecified)). The distribution of subcategories for each category can be found in figures 2, 3, 4 and 5 in Appendix B. The most frequent subcategory in TRAUMA is emotional abuse, which occurs twice as often as physical abuse and death in the dataset. The most frequent form of PRECARITY is family conflict, followed by family illness (including parental substance abuse) and socioeconomic precarity. The most frequent CONDITIONS are depressive disorders, followed by substance abuse/addiction and ADHD. The most frequent SYMPTOMS are anxiety, low self-esteem, self-harm and low mood. Interestingly, the distribution of subcategories differs quite substantially in the synthetic datasets (distributions for the GPT3.5 and GPT4 generated datasets can be found in Appendix B). Overall, the number of subcategories is reduced, indicating less diversity (however, these are smaller datasets). The top trauma subcategories are sexual abuse for GPT3.5 and school bullying for GPT4, both of which were much less prevalent in real data. The second most prevalent condition for both GPT3.5 and GPT4 is eating disorders, whereas these ranked in 8th place in real data. Finally, unlike in real data, flashbacks and panic attacks are the 3rd and 4th most frequent symptoms for both GPT3.5 and GPT4-generated data, whereas self-harm ranks much lower than in real data. Given that many of these subcategories were given as examples in the annotator guidelines and LLM prompt, it is likely that the LLMs used them in a more homogeneous manner during generation than would be found in the distribution of real data. However, the distribution is not entirely homogeneous, which suggests the LLMs did leverage some of the biases learned from their training data. 4 Results Once both human and LLM annotations are standardised (a sketch of this standardisation step is given below), we conduct analyses to assess performance. We provide precision, recall and F1 at the category level and accuracy at the subcategory level, collapsed across subcategories (given their high number). 
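A minimal sketch of the standardisation step referred to above (Section 3.5) follows; the polarity-prefix convention and the mapping entries are those quoted in the text, while the helper name and regular expression are ours.

# Minimal sketch of annotation standardisation (Section 3.5): parse the P(...)/N(...)
# polarity prefixes and map subcategory variants to a standard term. Only mapping
# entries quoted in the text are included here.
import re

SUBCATEGORY_MAP = {
    "emotional": "emotional abuse",
    "physical violence": "physical abuse",
    "gun violence": "violence",
    "school shooting": "violence",
    "rape": "sexual abuse",
    "financial burden": "socioeconomic precarity",
    "poor": "socioeconomic precarity",
    "divorce": "family conflict",
    "self hatred": "low self esteem",
}

def standardise(label: str):
    """Turn e.g. 'P(adhd)' into ('positive', 'adhd'), mapping known variants."""
    match = re.fullmatch(r"\s*([pn])\((.+)\)\s*", label, re.IGNORECASE)
    if match is None:
        return None  # empty or malformed cell
    polarity = "positive" if match.group(1).lower() == "p" else "negative"
    subcategory = match.group(2).strip().lower()
    return polarity, SUBCATEGORY_MAP.get(subcategory, subcategory)

assert standardise("P(adhd)") == ("positive", "adhd")
assert standardise("n(financial burden)") == ("negative", "socioeconomic precarity")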
We compute category performance in two ways: Positive or Negative, where a point is awarded if the category contains an annotation in both human and LLM annotations, regardless of polarity (i.e., the annotator considered there was relevant information concerning the category TRAUMA) and Positive Only metrics, where negative annotations are counted as no annotations. The difference between the two metrics can be seen clearly in Table 1 (GPT3.5 results), where precision increases but recall diminishes for Positive Only. The increase in precision is due to the fact that GPT3.5 outputs a substantial number of negative annotations in cases where human annotators did not consider it relevant to mention the category. The reduction in recall, on the other hand, results from the fact that LLMs often confuse positive and negative annotations and will occasionally output a negative annotation for a positive one. For real data (Tables 1 and 2), GPT3.5\u2019s performance at the category level is average, with better performance in the Positive Only metrics (0.57). GPT4 performs better, especially in Positive Only metrics (0.63) and subcategory accuracy (0.48 vs. 0.39). In general, recall is higher than precision, indicating LLMs may be overpredicting labels. The performance for synthetic data (Tables 3 and 4) is substantially better, with no gap between the Positive or Negative and Positive Only metrics, suggesting less irrelevant negative annotations. Here again, GPT4 outperforms GPT3.5, both at the category level (0.75 vs 0.70 and 0.73 vs 0.68) and more particularly at the subcategory level, where GPT4 reaches an impressive accuracy of 0.72 (vs 0.42). The gap between recall and precision is reduced for GPT4, whereas GPT3.5 displays higher precision than recall here. In order to assess the upper bound of human performance, we calculate inter-annotator agreement for both real and synthetic datasets using Cohen\u2019s Kappa. Values can be found in Table 5. Interestingly, while performance at the category level in real data is lower (GPT3.5) or similar (GPT4) compared to humans, GPT4 displays a substantially higher accuracy at the subcategory level (0.47 vs 0.35). For synthetic data, GPT3.5 still underperforms human agreement on all three metrics, while GPT4 is on par with humans for the Positive Only and subcategory metrics and only underperforms in the Positive and Negative metric. Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.38 0.78 0.51 0.56 0.65 0.60 0.39 PRECARITY 0.26 0.43 0.33 0.45 0.31 0.37 0.22 CONDITION 0.33 0.85 0.48 0.54 0.72 0.62 0.55 SYMPTOMS 0.39 0.62 0.48 0.46 0.58 0.52 0.31 SUICIDALITY 0.44 0.79 0.56 0.80 0.68 0.73 / TREATMENT 0.48 0.72 0.58 0.72 0.58 0.64 / ALL 0.37 0.70 0.49 0.55 0.60 0.57 0.39 Table 1: GPT3.5 (real data). 
Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level 5 Error analysis We examine some of the sentences annotated by the LLMs in order to perform error analysis and extract the following findings (as mentioned previously some words have been paraphrased to preclude full-text search allowing user identification): \u2022 Both GPT3.5 and GPT4 produce infelicitous negations, i.e., negative annotations which would seem irrelevant to humans, e.g., (I have amazing people around me =>negative parental death or The internet is my one only coping mechanism =>trauma unspecified) \u2022 Despite being specifically prompted to only annotate factors related to the writer/speaker, LLMs (including GPT4) do not always comply, e.g., She comes from what is, honestly, a horrific family situation =>emotional abuse) 6 Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.44 0.89 0.59 0.57 0.84 0.68 0.57 PRECARITY 0.31 0.52 0.39 0.50 0.46 0.48 0.36 CONDITION 0.46 0.81 0.59 0.61 0.77 0.68 0.57 SYMPTOMS 0.35 0.78 0.49 0.45 0.73 0.56 0.41 SUICIDALITY 0.36 0.93 0.51 0.70 0.87 0.77 / TREATMENT 0.39 0.87 0.54 0.64 0.81 0.71 / ALL 0.39 0.80 0.52 0.55 0.75 0.63 0.48 Table 2: GPT4 (real data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.90 0.49 0.64 0.90 0.49 0.64 0.38 PRECARITY 0.84 0.69 0.76 0.86 0.69 0.76 0.54 CONDITION 0.44 0.67 0.53 0.47 0.67 0.55 0.59 SYMPTOMS 0.85 0.59 0.70 0.84 0.59 0.69 0.36 SUICIDALITY 0.75 1.00 0.85 0.77 0.90 0.83 / TREATMENT 0.68 0.84 0.75 0.76 0.57 0.65 / ALL 0.74 0.65 0.70 0.77 0.61 0.68 0.42 Table 3: GPT3.5 (synthetic data). Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level Category Positive or Negative Positive Only Subcategory Precision Recall F1-Score Precision Recall F1-Score Accuracy TRAUMA 0.84 0.95 0.89 0.86 0.92 0.89 0.82 PRECARITY 0.85 0.84 0.85 0.91 0.82 0.86 0.80 CONDITION 0.61 0.67 0.64 0.60 0.67 0.63 0.67 SYMPTOMS 0.49 0.78 0.60 0.53 0.80 0.64 0.69 SUICIDALITY 0.81 0.94 0.87 0.78 0.82 0.80 / TREATMENT 0.85 0.89 0.87 0.87 0.78 0.82 / ALL 0.69 0.83 0.75 0.69 0.79 0.73 0.72 Table 4: GPT4 (synthetic data). 
Positive or Negative: counting annotation in category regardless of polarity (category level); Positive Only: counting negative annotations as NaN (category level); Subcategory: accuracy at the subcategory level \u2022 Even GPT4 makes errors regarding negation (e.g., I\u2019ve read about people with autism getting temper tantrums/meltdowns, however, that has never really been a problem for me=>negative autism or i had in my head that something inside was very wrong, but i never felt completely depressed all the time so i never took bipolar seriously =>negative bipolar disorder) \u2022 Despite being prompted to annotate suicidality in a separate category, LLMs often annotate it in the SYMPTOM rather than SUICIDALITY category \u2022 GPT3.5 especially often outputs irrelevant/spurious/incorrect labels (e.g., \u2018unemployed\u2019 as condition, \u2018ambition\u2019 as symptom, labelling physical conditions instead of mental conditions only, etc.) 7 Positive and Negative Positive Only Subcategory Annotator vs. Annotator (real data) 0.60 0.59 0.35 GPT3 vs. Annotator (real data) 0.39 0.52 0.37 GPT4 vs. Annotator (real data) 0.43 0.58 0.47 Annotator vs. Annotator (synthetic data) 0.77 0.71 0.68 GPT3 vs. Annotator (synthetic data) 0.64 0.63 0.40 GPT4 vs. Annotator (synthetic data) 0.70 0.69 0.71 Table 5: Inter-annotator agreement (Cohen\u2019s Kappa) \u2022 Even GPT4 makes errors regarding factuality (e.g., It was around my second year in junior high school when my father tried to take his life =>positive death) However, in many cases the assessment is not entirely fair, as the LLMs (particularly GPT4) often catch annotations which human annotators missed, or the difference in subcategories is subjective and open to debate (e.g., school bullying vs emotional abuse, emotional abuse vs abuse unspecified, etc.). Thus it is possible that LLMs, or most likely GPT4, in fact outperformed experts on this task. 6 Discussion The results obtained from our comparison of LLM annotations with human annotations on both real and synthetic data allow us to make a few conclusions and recommendations. Overall, both LLMs perform well. Inter-annotator agreement and performance indicate that GPT4 performs on par with human annotators. In fact, error analysis and manual examination of annotations suggest the LLMs potentially outperform human annotators in terms of recall (sensitivity), catching annotations which have been missed. However, while recall might be improved in LLMs versus human annotators, precision may suffer in unexpected ways, for example through errors in the use of negation and factuality, even in the case of GPT4. LLMs display a particular tendency to overpredict labels and produce negative annotations in infelicitous contexts, i.e., when humans would deem them irrelevant, creating an amount of noise. However, these negative annotations are not technically incorrect. While accuracy errors could be found in the LLM output, the experts\u2019 outputs were not entirely free of them, and previous work by [37] suggests LLMs may both be more complete AND more accurate than medical experts. There may still be a difference in the type of accuracy errors produced by LLMs, which will have to be investigated in future research. In terms of accuracy at the subcategory level, we were surprised to find GPT4 outperformed human agreement by a large margin in real data (0.47 vs 0.35). 
We hypothesise this is due to the fact that human annotators display higher subjectivity in their style of annotation at the subcategory level (given the lack of predetermined subcategories) and diverge more between them. LLMs are likely to be more \u2018standard\u2019 and generic and thus potentially more in agreement with any given human annotator. More specifically, LLMs tend to be consistent from one annotation to the other with higher recall, whereas human annotators showed less consistency. Therefore, if a sentence mentions physical, sexual and emotional abuse, annotators might only mention two out of three, but when mentioning all three an LLM is more likely to be in agreement than another annotator, i.e., the LLM will catch more of the perfectly recalled annotations than the second annotator. The better performance demonstrated on synthetic data doesn\u2019t seem due to LLMs performing better on data they are generating, but rather to the synthetic data being less complex and diverse and thus easier to annotate for both LLMs and humans, as evidenced by GPT4 reaching similar inter-annotator agreement scores to humans (with both human-human and LLM-human agreement 10% higher for synthetic data). This better performance could still warrant using synthetic data for, e.g., training machine learning models (given more reliable labels), but only in cases where the potential loss in diversity is compensated by the increase in label reliability. This will likely depend on the specific application. 7 Conclusion We presented the results of a study examining human and Large Language Model (GPT3.5 and GPT4) performance in extracting mental health factors from adolescent social media data. We performed analyses both on real and synthetic data and found GPT4 performance to be on par with human inter-annotator agreement for both datasets, with substantially better performance on the synthetic dataset. However, we find that GPT4 still makes errors in negation and factuality of a kind human annotators do not, and that synthetic data is much less diverse and differently distributed than real data. The potential for future applications in healthcare will have to be determined by weighing these factors against the substantial reductions in time and cost achieved through the use of LLMs. Acknowledgment I.L., D.W.J., and A.K. are partially supported by the National Institute for Health and Care Research (NIHR) AI Award grant (AI_AWARD02183) which explicitly examines the use of AI technology in mental health care provision. A.K. declares a research grant from GlaxoSmithKline (unrelated to this work). This research project is supported by the NIHR Oxford Health Biomedical Research Centre (grant NIHR203316). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health and Social Care.", "additional_graph_info": { "graph": [ [ "Isabelle Lorge", "Andrey Kormilitzin" ], [ "Isabelle Lorge", "Janet B. Pierrehumbert" ], [ "Andrey Kormilitzin", "Qiang Liu" ], [ "Andrey Kormilitzin", "Hao Ni" ] ], "node_feat": { "Isabelle Lorge": [ 
{ "url": "http://arxiv.org/abs/2403.15885v2", "title": "STEntConv: Predicting Disagreement with Stance Detection and a Signed Graph Convolutional Network", "abstract": "The rise of social media platforms has led to an increase in polarised online\ndiscussions, especially on political and socio-cultural topics such as\nelections and climate change. We propose a simple and novel unsupervised method\nto predict whether the authors of two posts agree or disagree, leveraging user\nstances about named entities obtained from their posts. We present STEntConv, a\nmodel which builds a graph of users and named entities weighted by stance and\ntrains a Signed Graph Convolutional Network (SGCN) to detect disagreement\nbetween comment and reply posts. We run experiments and ablation studies and\nshow that including this information improves disagreement detection\nperformance on a dataset of Reddit posts for a range of controversial subreddit\ntopics, without the need for platform-specific features or user history.", "authors": "Isabelle Lorge, Li Zhang, Xiaowen Dong, Janet B. Pierrehumbert", "published": "2024-03-23", "updated": "2024-03-26", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "main_content": "2.1. Stance The word stance refers to the intellectual or emotional attitude or position of an author towards a specific concept or entity, such as atheism or the legalisation of abortion (Mohammad et al., 2016). 
This is different from the concept of sentiment as it is usually defined in sentiment analysis, where it refers to the overall emotion expressed by a piece of text. A given text can then have one sentiment value but express multiple positive and negative stances, the target of which is not necessarily explicitly mentioned in the text. Some concepts lend themselves more easily to the elicitation of stance. For example, consider the following quote from the Wikipedia section on Donald Trump: Donald John Trump (born June 14, 1946) is an American politician, media personality, and businessman who served as the 45th president of the United States from 2017 to 2021. Trump\u2019s political positions have been described as populist, protectionist, isolationist, and nationalist. He won the 2016 presidential election as the Republican nominee against Democratic nominee Hillary Clinton despite losing the popular vote. It would be strange to ask whether the author is in favour or against born, media, businessman, who, is, from, described, presidential or popular. On the other hand, the words in bold type 2 (with the exception of numerals, i.e. dates and ordinals) seem like more appropriate targets for holding an opinion. These words are generally referred to as Named Entities or NEs, a term first coined for the Sixth Message Understanding Conference (MUC-6) (Grishman and Sundheim, 1996). The category aims to encompass expressions which are rigid designators, as defined by Kripke (1980), i.e., which designate the same object in all possible worlds in which that object exists and never designate anything else. In other words, these expressions refer to specific instances in the world, including but not limited to proper names. Common NE categories are: organisations, people, locations (including states), events, products and quantities (including dates, times, percent, money, quantity, ordinals and cardinals). With the exception of the last category, most of these constitute valid targets for extracting author stance because, unlike other terms (e.g., verbs, adverbs, prepositions, some nouns and adjectives) they can be involved in debates and elicit diverging intellectual or emotional viewpoints. Communities tend to be created on the basis of shared traits which are either given (e.g., race, gender, nationality) or acquired (preferences and opinions). Thus, for many contentious issues, agreement and disagreement between individuals are likely to crystallise around attitudes towards a few key entities which define membership identity in a form of \u2018neotribalism\u2019 (Maffesoli, 1995). Stance can be modelled at the post or author level. Here we choose to leverage all posts from a known author to determine their stance toward a specific entity. 2Bold typed words are all word spans identified by Spacy (Honnibal and Montani, 2017) as named entities. 2.2. Signed Graphs Graphs, defined as combinations of nodes and edges, are useful abstractions for a variety of structures and phenomena. They can take several forms: directed (e.g., Twitter following) vs. undirected (e.g., Facebook friends); signed (e.g., likes and dislikes) vs. unsigned (e.g., retweets), homogenous vs. bipartite (with nodes of different types where there is no between-type edges, e.g., employees and companies they worked for). In the current paper, given we model user-entity stances, the graph constructed is a signed bipartite graph. 
While it is technically directed (the stance is from user towards entity), there are no edges in the opposite direction (i.e., from entity to user), thus we treat the graph as undirected for simplicity. Various methods have been developed for node representation in graphs. When no node features are available, methods relying on connectivity and random walks such as DeepWalk (Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016) can produce low-dimensional node embeddings using a similar algorithm to skip-gram in Word2vec, i.e., by predicting a node given previously encountered nodes. Graph Neural Networks (GNNs), on the other hand, can leverage both connectivity and node features from a local neighbourhood to produce node representations. Among these, Graph Convolutional Networks (GCNs) were first introduced by Kipf and Welling (2016) and constitute a popular option akin to a generalisation of Convolutional Neural Networks (CNNs), by performing a first-order approximation of a spectral filter on a neighbourhood. GCNs were originally designed to handle unsigned graphs. However, in the case of stance, as well as in many other applications related to social media, we encounter networks which are signed, i.e., which involve positive and negative edges. Processing these types of graphs and producing meaningful node representations is not straightforward, as there is an intrinsic qualitative difference between the two types of edges which cannot be optimally resolved by e.g., treating them alike, ignoring negative edges, or ignoring edges that cancel each other. One solution is to keep positive and negative representations separate from the graph neural network separate and simply concatenate them. Another way suggested by Derr et al. (2018) relies on assumptions from balance theory (Heider, 1946; Cartwright and Harary, 1956), which comes from social psychology and formalises intuitions such as \u2018an enemy of an enemy is a friend\u2019. Thus, for each layer l, the aggregation function would gather on the positive side not only friendly nodes, but friends of friends and enemies of enemies, and similarly on the negative side get information from enemies but also friends of enemies and enemies of friends. The positive and negative convolutions are then concatenated together to produce the final node representations as in the simpler model. In our experiments we test both the simple signed model and the model with additional aggregations based on balance theory. 3. Dataset We use the DEBAGREEMENT dataset for our experiments (Pougu\u00e9-Biyong et al., 2021), a dataset of 42894 Reddit comment-reply pairs from 5 different subreddits (r/Brexit, r/climate, r/BLM, r/Republican and r/democrats) with each pair given one of three labels: agree/neutral/disagree (see dataset statistics in tables 1 and 2). The pairs of posts were labelled by crowdsourcers who received intensive training on the issues discussed in the various subreddits. The disagreement prediction task consists in predicting which of the three labels describes the relation between the comment and reply posts. This is a very difficult task for a number of reasons. First, assessing disagreement around issues such as those discussed in the selected subreddits requires expert knowledge (hence the need for specific training of crowdsourcers). 
Second, there is a high level of subjectivity involved, which is evidenced by the \u2018clean\u2019 version of the dataset still containing over 60% of labels on which only 2 out of 3 crowdsourcers agreed. Because of the latter, after examining the data we choose to work with the portion of the dataset that was given the same label by all three crowdsourcers (16723 pairs of posts). Finally, it is worth noting that most previous works focusing on disagreement use Twitter data exclusively (e.g., Darwish et al., 2020; Trabelsi and Zaiane, 2018; Zhou and Elejalde, 2023). Many other platforms, like Reddit, lack network features such as common hashtags, user following and retweets, which are highly useful for detecting endorsement between users. It is then much harder to create user representations indicative of polarisation. This emphasises the crucial need to find alternative features, such as user-entity stances, which can generalise across platforms. While the task has been tackled using user network features (Luo et al., 2023), no previous works have attempted to improve performance without leveraging user interaction features, which may not always be available.
Table 1: DEBAGREEMENT statistics per subreddit and period
              r/Brexit   r/climate   r/BlackLivesMatter   r/Republican   r/democrats
start date    Jun 2016   Jan 2015    Jan 2020             Jan 2020       Jan 2020
agree         0.29       0.32        0.45                 0.34           0.42
neutral       0.29       0.28        0.22                 0.25           0.22
disagree      0.42       0.40        0.33                 0.41           0.36
Table 2: DEBAGREEMENT post counts and word lengths
                      comment-reply count   avg length (comment)   avg length (reply)
r/Brexit              15745                 45                     40
r/climate             5773                  43                     41
r/BlackLivesMatter    1929                  41                     39
r/Republican          9823                  38                     35
r/Democrats           9624                  38                     37
4. Framework
4.1. User-Entity Graph Construction
Let G = (N, E) be a signed undirected bipartite graph, where U \u2208 N is the set of user nodes, A \u2208 N is the set of entity nodes and E is the set of edges between users and entities, with E+ the set of positive edges and E- the set of negative edges. Since this is a bipartite graph, there are no edges between users or between entities, and the sets of positive and negative edges are defined to be mutually exclusive (i.e., there is at most one edge, either positive or negative, between a user and an entity) (see figure 2).
Figure 2: Example user-entity graph. The network is signed, with each edge representing user stance towards an entity.
We build the graph in the following way. First, we extract named entities for each comment and reply post using Spacy (Honnibal and Montani, 2017), discarding entities which pertain to the categories \u2018CARDINAL\u2019, \u2018DATE\u2019, \u2018ORDINAL\u2019, \u2018WORK_OF_ART\u2019, \u2018PERCENT\u2019, \u2018QUANTITY\u2019 and \u2018MONEY\u2019. Since we do not have ground truth for the stance of each author towards each extracted entity, we devise an unsupervised method to obtain a proxy for it by leveraging Sentence-BERT (Reimers and Gurevych, 2019). For each entity, we create \u2018pro\u2019 and \u2018con\u2019 sentences using the templates I am for X and I am against X. We then compute the cosine similarity between each SBERT-embedded post sentence and each SBERT-embedded template sentence and subtract the \u2018con\u2019 cosine similarity from the \u2018pro\u2019 cosine similarity (while this method has, to our knowledge, not been used previously, we manually examined the stances computed for the named entities in 100 sentences, rating each as correct or incorrect, and found satisfactory performance: 0.68 accuracy). Finally, we take the mean of all cosine differences for an entity across all of the user\u2019s posts (in addition, we mean-centre all edge weight values by subtracting the mean).
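As a rough illustration of this scoring step, the sketch below extracts named entities with Spacy and scores each against the \u2018pro\u2019/\u2018con\u2019 templates with Sentence-BERT. It is a minimal reconstruction rather than the authors\u2019 code, and the model checkpoints (en_core_web_sm, all-MiniLM-L6-v2) are stand-ins, since the specific checkpoints are not named here.

```python
import spacy
from sentence_transformers import SentenceTransformer, util

EXCLUDED = {"CARDINAL", "DATE", "ORDINAL", "WORK_OF_ART", "PERCENT", "QUANTITY", "MONEY"}

nlp = spacy.load("en_core_web_sm")               # NER model (illustrative choice)
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # SBERT encoder (illustrative choice)

def entity_stance_scores(posts):
    """Return {entity: mean('pro' cosine - 'con' cosine)} over the sentences of a user's posts."""
    diffs = {}
    for post in posts:
        doc = nlp(post)
        sent_emb = sbert.encode([s.text for s in doc.sents], convert_to_tensor=True)
        for ent in {e.text.lower() for e in doc.ents if e.label_ not in EXCLUDED}:
            pro, con = sbert.encode([f"I am for {ent}", f"I am against {ent}"],
                                    convert_to_tensor=True)
            # mean over sentences of (pro similarity - con similarity)
            score = (util.cos_sim(sent_emb, pro) - util.cos_sim(sent_emb, con)).mean().item()
            diffs.setdefault(ent, []).append(score)
    # average per-post scores across all of the user's posts
    return {ent: sum(v) / len(v) for ent, v in diffs.items()}

print(entity_stance_scores(["I am really against Brexit. The referendum was a disaster."]))
```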
The advantage of this method is that it is almost entirely unsupervised and does not require prior domain knowledge or manual selection of relevant topics or entities (only the subreddit titles; see below), as these naturally arise among the most frequent entities extracted from the corpus. Therefore, we define our stance measure as:
stance_{u,e} = \frac{1}{|P|} \sum_{i \in P} \frac{1}{|S|} \sum_{s \in S} \left( \cos^{+}_{s} - \cos^{-}_{s} \right) \quad (1)
where stance_{u,e} is the stance of user u towards entity e, i \in P is the ith post contributed by user u, s \in S is a sentence in the post, \cos^{+}_{s} is the cosine similarity of the post sentence with the embedded \u2018pro\u2019 template sentence and \cos^{-}_{s} is the cosine similarity of the post sentence with the embedded \u2018con\u2019 template sentence. We notice a negative bias in our extracted cosine similarities, whereby the mean \u00b5 of stance values lies around -0.02; accordingly, we split edges into positive edges (stance_{u,e} >= \u00b5) and negative edges (stance_{u,e} < \u00b5) after verifying that the stances follow a normal distribution and that the median is close to the mean. The statistics of the resulting graph can be seen in table 4.
Table 4: User-entity graph statistics for full training and test datasets.
        |U|    |A|   |E+|   |E-|   |D|     |D(U)|   |D(A)|   |CN(U)|   |CN(A)|
train   7107   67    3997   4615   0.001   1.83     194      0.32      5.67
test    1513   67    863    866    0.002   1.48     37       0.20      0.60
|U|: number of users; |A|: number of entities; |E+|: number of positive edges; |E-|: number of negative edges; |D|: graph density; |D(U)|: average degree (users); |D(A)|: average degree (entities); |CN(U)|: average common neighbors (users); |CN(A)|: average common neighbors (entities).
When examining the extracted entities, it appears that most entities which occur only a few times in the corpus are irrelevant to our task. We thus apply a combination of two filters: we keep entities which are amongst the 5000 most frequent entities and whose embeddings have a cosine similarity above 0.5 to at least one subreddit title embedding (Brexit, climate, BLM, Republican and democrats) (the embeddings used are the initial features for the GCN, which are Word2vec embeddings trained on our dataset; see the Training section).
Table 3: Extracted target entities
american, antifa, aoc, asian, backstop, bernie, biden, black, blm, brexit, brexiteers, brown, christian, cnn, communist, con, confederate, conservative, corbyn, cuomo, dem, democrat, democratic, dems, dnc, fascist, fbi, floyd, george, gop, greta, holocaust, jew, kkk, leave, leftist, liberal, libertarian, maga, marxist, mcconnell, moderate, moron, msm, muslim, nazi, party, patriot, pete, poc, progressive, propaganda, qanon, racist, referendum, remainers, republican, riot, romney, sander, senate, statue, tory, trump, tucker, warren, white
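To make the mean-centring and sign-splitting step described above concrete, a small illustrative sketch follows; the input format, helper name and example values are assumptions rather than the authors\u2019 code.

```python
import statistics

def build_signed_edges(user_stances):
    """user_stances: {user: {entity: stance score}} -> list of (user, entity, sign, weight).

    Scores are mean-centred and split into positive/negative edges at the mean mu,
    mirroring the thresholding described above (mu was around -0.02 in the paper).
    """
    all_scores = [s for ent_scores in user_stances.values() for s in ent_scores.values()]
    mu = statistics.mean(all_scores)
    edges = []
    for user, ent_scores in user_stances.items():
        for entity, score in ent_scores.items():
            centred = score - mu              # mean-centred edge weight
            sign = 1 if score >= mu else -1   # positive vs. negative edge
            edges.append((user, entity, sign, centred))
    return edges

print(build_signed_edges({"user_1": {"brexit": -0.15, "corbyn": 0.08}}))
```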
We obtain both values by conducting a sensitivity correlation analysis on the training set in the following way: we model a negative and a positive entity vector for each author, respectively, as the sum of the negative-stance and positive-stance entities, concatenate them, and measure the Kendall \u03c4 rank correlation between the cosine similarity of the author vectors for a given comment pair and the label (0, 1 or 2 for disagreement, neutral and agreement). There is a clear peak in correlation with entities which have over 0.5 cosine similarity with at least one subreddit title and are within the 5000 most frequent entities, thus we select these as our threshold values. We also filter out multiword entities, which often show redundancy and misextractions. The final set of 67 target entities can be seen in Table 3, a heatmap of cosine similarities to each subreddit can be found in Appendix A and a visualisation of the user-entity graph can be seen in figure 1. To get a fair assessment of our model and be able to directly compare it with the performance of the GCN model alone, we subset the training dataset to comment-reply pairs which mention at least one of our target entities (for other comment-reply pairs the GCN would not have any features). The final dataset is made of 1770 comment-reply pairs. While this constitutes only 10% of the original full agreement dataset, we notice that disagreements which are most closely related to the subreddit\u2019s controversial topic will often contain the target entities. (While our assumption that the model leverages information from entities not present in the text suggests we should be able to use posts which do not mention target entities, we find empirically that this is not the case. However, the better performance over the BERT baseline suggests the model does make use of information not present in the comment-reply pair. We hypothesise that this is because of a correlation between the presence of target entities in specific pairs of posts and the amount of additional information in the entity graph, i.e., posts which do not contain target entities tend to come from authors for whom there is little or no entity information.) We also believe that, given the difficulty of the task (especially on Reddit data, where many network features available on Twitter cannot be used), an improvement on a subset of disagreement types is worthwhile and holds promise for applicability. We do provide results for the subset of the dataset where either the comment or the reply mentions one of our target entities, which constitutes about 40% of the full agreement dataset, or 6174 comment-reply pairs; however, we cannot run a comparison with the GCN model alone in this case.
4.2. STEntConv
We adopt the Signed Graph Convolutional Network proposed in Derr et al. (2018) and modify it to integrate edge weights for our stance values, so that the positive and negative convolutions are as follows:
h^{B(l)}_i = \sigma\left(W^{B(l)}\left[\sum_{j \in N^{+}_i} \frac{h^{B(l-1)}_j}{|N^{+}_i|} w_j,\; \sum_{k \in N^{-}_i} \frac{h^{U(l-1)}_k}{|N^{-}_i|} w_k,\; h^{B(l-1)}_i\right]\right), \quad (2)
h^{U(l)}_i = \sigma\left(W^{U(l)}\left[\sum_{j \in N^{+}_i} \frac{h^{U(l-1)}_j}{|N^{+}_i|} w_j,\; \sum_{k \in N^{-}_i} \frac{h^{B(l-1)}_k}{|N^{-}_i|} w_k,\; h^{U(l-1)}_i\right]\right), \quad (3)
where h^{B(l)}_i is the weighted aggregation of positive edges for layer l, \sum_{j \in N^{+}_i} \frac{h^{B(l-1)}_j}{|N^{+}_i|} w_j is the weighted sum of \u2018friends of friends\u2019, \sum_{k \in N^{-}_i} \frac{h^{U(l-1)}_k}{|N^{-}_i|} w_k is the weighted sum of \u2018enemies of enemies\u2019 and h^{B(l-1)}_i is the previous layer\u2019s positive-edge aggregation. Similarly, h^{U(l)}_i is the aggregation of negative edges for layer l, \sum_{j \in N^{+}_i} \frac{h^{U(l-1)}_j}{|N^{+}_i|} w_j is the weighted sum of \u2018enemies of friends\u2019, \sum_{k \in N^{-}_i} \frac{h^{B(l-1)}_k}{|N^{-}_i|} w_k is the weighted sum of \u2018friends of enemies\u2019 and h^{U(l-1)}_i is the previous layer\u2019s negative-edge aggregation. We run experiments both with the additional aggregations from balance theory and without (i.e., only aggregating direct friends for positive edges and direct enemies for negative edges), in which case the respective weighted aggregations are simply:
h^{B(l)}_i = \sigma\left(W^{B(l)}\left[\sum_{j \in N^{+}_i} \frac{h^{(l-1)}_j}{|N^{+}_i|} w_j,\; h^{(l-1)}_i\right]\right), \quad (4)
h^{U(l)}_i = \sigma\left(W^{U(l)}\left[\sum_{k \in N^{-}_i} \frac{h^{(l-1)}_k}{|N^{-}_i|} w_k,\; h^{(l-1)}_i\right]\right). \quad (5)
This is also the definition of the aggregations for the first layer l = 1. We build the weighted version of the algorithm by adapting the unweighted Derr et al. (2018) implementation from PyTorch Geometric (PyG) (Fey and Lenssen, 2019). The rationale for integrating edge weights into the convolutional layer is that, given our unsupervised method for calculating stance, a high absolute value is more reliable and thus considered more informative. Positive/negative edge and node features should then be weighted accordingly when performing message passing (e.g., a small edge weight is more likely to denote a stance close to neutral). The output of the GCN is concatenated to the output of a BERT layer for comment and reply posts and fed to a one-layer feed-forward network. The final model architecture can be seen in figure 3.
Figure 3: Model architecture.
5. Baselines
5.0.1. BERT
As a baseline, we fine-tune a BERT (base, uncased) layer as was originally used by Pougu\u00e9-Biyong et al. (2021), i.e., we ablate the graph convolutional layer from our model and feed the BERT output to the linear layer for classification.
5.0.2. GCN only
In addition, we conduct the opposite ablation, i.e., we use only the GCN to assess how the model performs relying solely on the positive and negative edges of the stance graph, without any access to the text of the posts.
5.0.3. StanceRel
Luo et al. (2023) improve on previous results on the DEBAGREEMENT dataset by building a graph autoencoder and training it on a signed undirected user-user interaction graph, which creates user representations based on their previous interactions (agreement or disagreement, i.e., positive or negative), and using the decoded user features along with textual features to detect disagreement. While a direct comparison with our approach may not be fully informative (as the two models use different features which may be available in different situations), we train their model on our subset of data and provide results on our test set to give a picture of the differential value of various user features.
5.0.4.
FALCON We also test an instruct-trained version of Falcon (Almazrouei et al., 2023), specifically vilsonrodrigues/falcon-7b-instruct-sharded implemented through the transformers library (Wolf et al., 2019). Prompt and hyperparameters used can be found in Appendix B. 6. Training We train 100-dimensional word2vec6 embeddings on the full dataset and use the resulting vectors 6We also experiment with GloVe embeddings but the performance is worse. as initial features for our entities. User features are initialised as 100-dimensional vectors of zeros. We obtain contextual text embeddings for comment and reply posts through the transformers library (Wolf et al., 2019) implementation of a BERT (base, uncased) layer whose output we mean pool, excluding special tokens, and concatenate these together with the output of our weighted Signed Graph Convolutional Network layer before feeding this to a one-layer linear classifier. Since the classes are not entirely balanced, we compute class weights and weigh class loss accordingly during training. We use cross entropy as loss function, a batch size of 16, a hidden size of 300 for the first convolutional layer, a learning rate of 3e-5 and the Adam optimiser with weight decay 1e-5. We split the data into 0.80 train, 0.10 dev and 0.10 test and train for 6 epochs (models with BERT layers) and 11 epochs (GCN only). We experiment with number of convolutional layers (one versus two), type of aggregation (balance theory or only direct friends and enemies), edge weights (binary versus weighted) and sentences used to calculate stance (full post versus only sentences containing target entity). We train the models with three different random seeds and average the results. 7. Results Results can be found in Table 5. The best performing model uses one convolution layer, only direct friends and enemies, weighted edges, and the full text of the post for stance extraction. As can be seen, on the (c&r) subset the addition of the user stance graph information helps improve model performance by 7 points on average compared to the BERT baseline and 6 points over the StanceRel model which previously obtained the best results on this task. While the improvement in performance is weaker for the version of STEntConv trained on the (c|r) dataset (since this dataset includes authors for whom the model has no relevant stance information), the model still achieves a 3 point increase over the BERT baseline. In the non-multiple aggregation model, the boost from the stance graph information is lowest for r/Republican. This is consistent with additional analyses we conduct on the test dataset showing that the r/Republican subreddit has the highest ratio of target entities present in posts versus all entity information available for authors, meaning that there is little extra information from the GCN to be used and most relevant information is already available to the BERT model. Thus, it is likely that the better performance of our model is due to STEnTConv being able to leverage stance information about entities not present in the comment-reply pair being classified. Adding the second aggregation from (Derr et al., 2018) performs better on the r/Brexit and r/Republicans subreddits. This would tend to indicate that the additional aggregations for these subreddits remain relevant to the task whereas they introduce noise for r/climate. This is supported by r/climate subreddit having the lowest cosine similarity on average to the target entities. 
Falcon performs particularly poorly, in addition to requiring over an hour for inference on the test set (vs. a few seconds for BERT-based models and GCN). StanceRel, while above the BERT baseline, underperforms our model on the test set. Given that the model leverages user history which represents a strong additional signal, we expected it to perform better. The lower performance may be due to sparseness in the user interaction graph or to the model requiring a larger dataset to learn features from interaction history. In addition, the graph StanceRel uses is weighted by frequency which may be prone to biases (e.g., user A from the Green Party agreed with user B from the Green Party many times, but they disagreed on an issue involving a specific entity, e.g. French nuclear company AREVA). Finally, they leverage the assumptions from balance theory which we show may not be useful for this particular task. Unsurprisingly, the neutral class is hardest to classify, with an f1 of 0.4 versus 0.8 for the agree/disagree classes. Examining the data indicates one reason for this might be the heterogenous nature of the class, with some neutral pairs of posts discussing unrelated topics and others agreeing and disagreeing in equal amounts. The confusion matrix between classes can be seen in figure 4. Figure 4: Confusion matrix (STEntConv) r/Brexit r/climate r/Republican r/democrat r/BLM* all (sd) (c|r) BERT .75 .79 .73 .69 .72 .72 (0.03) (c|r) STEntConv .78 .78 .76 .71 .75 .75 (0.02) (c&r) FALCON .40 .25 .45 .38 1.0 .42 (0.28) (c&r) BERT .58 .54 .69 .63 .67 .64 (0.06) (c&r) StanceRel .67 .30 .67 .60 1.0 .65 (0.22) (c&r) STEntConv (GCN) .36 .44 .44 .37 .67 .43 (0.11) (c&r) STEntConv (m.agg) .70 .41 .73 .69 1.0 .70 (0.18) (c&r) STEntConv .62 .64 .70 .74 1.0 .71 (0.14) Table 5: Macro averaged F1 for each model and subreddit. STEntConv = our model enhanced with entity stances; BERT= BERT model (base, uncased); StanceRel = relation graph model from Luo et al. (2023) FALCON: Falcon model (instruct trained, 7B); GCN = STEntConv without BERT component; m.agg: multiple aggregations, i.e. using the \u2018friend of friend\u2019 additional aggregation from Derr et al. (2018). (c&r) = dataset with target entity in comment and reply; (c|r) = dataset with target entity in comment or reply. Best in bold.*The (c&r) test set only contained one comment-reply pair from the r/BLM subreddit. 8. Related Work There have been a number of studies aimed at modelling stance and disagreement. Regarding stance, a supervised task was introduced at SemEval 2016 along with a small dataset of 4870 tweets annotated for stance against five topics (Atheism, Climate Change is a Real Concern, Feminist Movement, Hillary Clinton and Legalization of Abortion) (Mohammad et al., 2016). Winning teams used CNNs and ensemble models. However, the limited data and specificity of both style and topics make it difficult to extend a model trained on this data to detecting stance in other domains. Regarding unsupervised methods for both stance and disagreement, most methods have focused on modelling users through dimensionality reduction and clustering of content (Darwish et al., 2020) or through platform-specific (Trabelsi and Zaiane, 2018; Zhou and Elejalde, 2023) or non platform-specific (Luo et al., 2023) network features from user-user interactions. To the best of our knowledge, no previous method used a userentity graph to model user representations. 
Importantly, as stated in the introduction, most previous works use Twitter data which contains platform-specific network features. Thus, our work aligns with other efforts to build alternative network features in polarised communities when user endorsement features (e.g., retweets and follows) are not available, as is the case for Reddit data. For example, Hofmann et al. (2022) build a graph of edges between social entities (concepts and subreddits) to identify the level of polarisation for a given concept. In addition, even previous works which do not use platform-specific data still rely on features which may not always be available, such as user interaction history (Luo et al., 2023) and restrict ability to use graph features to known users. By contrast, our method can leverage entity stances at inference time to model novel users which have similar allegiances to users seen in training. To our knowledge, the current paper is the first to use representations from a signed graph between users and entities for the purpose of predicting disagreement. 9. Conclusion We presented a simple, unsupervised and domainagnostic pipeline for creating graph user features to improve disagreement detection between comment-reply pairs of social media posts. We ran several experiments against baselines and performed ablations to examine the contribution of model components and parameters. Our model uses GCN convolutions over a signed bipartite graph of automatically extracted user-entity stances. STEntConv can be leveraged to create comprehensive user representations which take into account stance towards various target entities in order to better predict whether two users are likely to agree or disagree on a range of controversial topics, regardless of availablity of platformspecific network features or user interaction history. As a next step, this method could easily be extended to target entities beyond named entities to include common nouns which are particularly relevant to the controversial topic, especially in cases where the topic is less likely to involve named entities. 10. Limitations Scale. We acknowledge that the improvement we demonstrate over the baseline applies to only a subset of the original dataset. However, given the difficulty of the task and the lack of additional network features for Reddit data we believe this improvement is still worthwhile. Furthermore, our method could potentially be extended to include stance towards relevant common nouns in addition to named entities. Domain. While the dataset we used covers a range of controversial topics from socio-cultural to political issues, it was extracted from 5 specific subreddits and thus it is uncertain to what extent our results would apply to other topics of disagreement and whether the model could be generalised to other domains. However, we note that our pipeline for extracting relevant entities is entirely domainagnostic and thus we believe it could be applied successfully to any forum debating a controversial topic. Acknowledgements The authors would like to acknowledge support from the EPSRC (EP/T023333/1). X.D. acknowledges support from the Oxford-Man Institute of Quantitative Finance. This study was supported by EPSRC Grant EP/W037211/1. 11.", "introduction": "Social media now form an integral part of many people\u2019s lives. 
While these tools have allowed users unprecedented access to shared content, ideas and views across the world, they have also permitted the fast rise and spread of harmful forms of communication, such as fake news, abuse and communities acting as radicalising echo chambers at unseen scales (Terren and Borge, 2021). It is then of high interest to investigate the polarisation of opinions as a reflection of ever-shifting politi- cal and socio-cultural dynamics which have direct impact on society. For example, detecting disagree- ment between users can help assess the contro- versiality of a topic, give insight into user opinions which would not be obtainable from their post in isolation or provide a way to estimate numbers for sides of a debate. Online communities constitute an ideal terrain for this investigation, as they are likely to foster vari- ous tensions and debates and allow researchers to examine them in real time or longitudinally (Alkhal- ifa and Zubiaga, 2022). Previous work on detect- ing disagreement has focused on supplementing textual information with user network information, either gathered through platform-specific features such as Twitter\u2019s following system, retweets and hashtags (which cannot be generalised across plat- forms) (e.g., Darwish et al., 2020 or through user- user interaction history (which is not necessarily available) (e.g., Luo et al., 2023). Instead, to the best of our knowledge we are the first to represent users through a user-entity signed graph weighted Work completed while Li Zhang was a postdoctoral assistant at the University of Oxford. by stance. In addition to being generalisable to any platform and not requiring user interaction his- tory, our method has the potential to provide more explainable representations for users by explicitly tracing disagreements back to entities they feel positively or negatively about. Furthermore, the graph can easily be adapted to various controver- sial topics by selecting entities relevant to that topic, is able to accommodate different amounts of infor- mation per user, and a user-entity signed network constitutes a natural and explicit representation of polarising allegiances (cf. figure1). Finally, we derive stance towards entities in an unsupervised manner which means there is no need to obtain costly manual labels. We make the choice to use BERT both for unsu- pervised stance detection and for the textual part of the model itself. This choice is partially to be able to directly assess the additional contribution of our signed network to the former best model from Pougu\u00e9-Biyong et al. (2021) for this dataset, but also because we build it with the view of a rela- tively lightweight model with potential for real time applications. In addition, large language models have been shown to underperform smaller state- of-the-art fine-tuned models on specific tasks, for example in the biomedical domain (Ateia and Kr- uschwitz, 2023). However, we do provide results for the performance of an open-source large lan- guage model (Falcon, Almazrouei et al. 2023) on the task as a comparison. Our main contributions are as follows1: 1) We offer a simple, unsupervised method to 1We make all our code and data available at https: //github.com/isabellelorge/contradiction arXiv:2403.15885v2 [cs.CL] 26 Mar 2024 Figure 1: User-entity graph visualised with Gephi (Bastian et al., 2009) (positive edges). We ap- ply a force atlas layout. 
Pink nodes are entities, blue nodes are users which we can see clustered around the target entities they expressed a positive stance for. extract user stances towards entities by leveraging sentence-BERT. 2) We build a model using a weighted Signed Graph Convolutional Network on a user-entity graph with BERT embeddings to detect disagree- ment, improving on previous state-of-the-art results on a dataset of Reddit posts. 3) We present various model ablation studies and demonstrate the robustness of the proposed framework. We start by outlining current research regarding stance and signed graphs which is relevant to our task. We then move on to describing the dataset used and graph extracted from our data, the archi- tecture and various parameters of our model. We finally present experimental results and discuss their implications." }, { "url": "http://arxiv.org/abs/2402.07645v1", "title": "Detecting the Clinical Features of Difficult-to-Treat Depression using Synthetic Data from Large Language Models", "abstract": "Difficult-to-treat depression (DTD) has been proposed as a broader and more\nclinically comprehensive perspective on a person's depressive disorder where\ndespite treatment, they continue to experience significant burden. We sought to\ndevelop a Large Language Model (LLM)-based tool capable of interrogating\nroutinely-collected, narrative (free-text) electronic health record (EHR) data\nto locate published prognostic factors that capture the clinical syndrome of\nDTD. In this work, we use LLM-generated synthetic data (GPT3.5) and a\nNon-Maximum Suppression (NMS) algorithm to train a BERT-based span extraction\nmodel. The resulting model is then able to extract and label spans related to a\nvariety of relevant positive and negative factors in real clinical data (i.e.\nspans of text that increase or decrease the likelihood of a patient matching\nthe DTD syndrome). We show it is possible to obtain good overall performance\n(0.70 F1 across polarity) on real clinical data on a set of as many as 20\ndifferent factors, and high performance (0.85 F1 with 0.95 precision) on a\nsubset of important DTD factors such as history of abuse, family history of\naffective disorder, illness severity and suicidality by training the model\nexclusively on synthetic data. Our results show promise for future healthcare\napplications especially in applications where traditionally, highly\nconfidential medical data and human-expert annotation would normally be\nrequired.", "authors": "Isabelle Lorge, Dan W. Joyce, Niall Taylor, Alejo Nevado-Holgado, Andrea Cipriani, Andrey Kormilitzin", "published": "2024-02-12", "updated": "2024-02-12", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "main_content": "A number of studies have examined the use of machine learning (ML) techniques to directly identify treatment response (Perlis, 2013; Nie et al., 2018). Several of these re-use the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort dataset (Rush et al., 2006), where four successive treatment steps were administered, patients having the option of exiting the study if they experienced sufficient remission of symptoms after a given step. These ML studies identified predictive features including depression severity, the presence of co-morbid physical illness, psychosis, Post-Traumatic Stress Disorder (PTSD), anxiety disorder alongside minority ethnic heritage, work and poor social adjustment. 
Other factors identified in later studies include recurrence of depressive episodes, age, response to a first antidepressant, suicidality, educational attainment and occupational status (Kautzky et al., 2017, 2019). Another recent study uses age, gender, race, diagnostic codes (including both ICD-9 and ICD-10), current procedural terminology codes and medications with a tree-based algorithm (Lage et al., 2022). These studies achieved relatively good performance using traditional ML models such as random forests, GBDT (gradient-boosted decision trees) and logistic regression, however they use data from structured data fields in electronic health records (EHRs) and focus on delivering a binary TRD outcome. In addition to the above studies using structured data fields, a few studies have used terms extracted from narrative EHR notes, notably Perlis et al. (2012), who leverage regular expressions to extract terms identified by clinicians as important for prediction and compare the outcome of a model trained with billing data to a logistic regression model trained with concepts extracted from narrative notes, finding that the latter performs substantially better. Similarly, Sheu et al. (2023) use a variety of deep learning models with features from both categorical variables and terms extracted from narrative notes using regular expressions (pattern matching) as well as Longformer vectors of the clinical note history to predict treatment outcome. Thus, previous works leveraged either structured categorical data (e.g., registered comorbidity, sociodemographic factors etc.), overall scores on standardised questionnaires, terms extracted from notes using rule-based techniques such as regular expressions or, in the case of Sheu et al. (2023), a nonexplainable vectorised representation of the full patient history. An alternative is to use Natural Language Processing (NLP) models trained to extract information from the narrative clinical notes (Vaci et al., 2020; Kormilitzin et al., 2021). For example, recent work by Kulkarni et al. (2024) uses a BERT model for extracting suicidality and anhedonia in mental health records using ground-truth manually annotated EHR data. NLP models are both flexible and ecologically valid, and have been increasingly used successfully for various applications in psychiatry (Vaci et al., 2021; Senior et al., 2020) and clinical neuroscience in recent years (Le Glaz et al., 2021; Liu et al., 2022). The models can be trained either to directly predict a phenotype based on treatment outcome using text features as in Sheu et al. (2023) or, as in the current study, to extract relevant features suggestive of a phenotype or syndrome (e.g., in the case of DTD, a history of abuse, self-harm and suicidality) to present to clinicians for decision making. However, there are significant hurdles to obtaining training data, such as data scarcity and the high financial and time cost of manual annotations. 3 Difficult-to-treat depression Difficult-to-treat depression or DTD is a more recent framework than TRD which was developed following a number of issues becoming apparent with using the latter concept. As described in McAllister-Williams et al. 
(2020): \u2022 The current definition of TRD largely ignores psychotherapy and neurostimulation treatments \u2022 The definition does not allow for differential levels of success in response or remission \u2022 The phrasing could imply treatment failure is a property of the patient, rather than inadequacy of the intervention \u2022 The term implies a medical model which may exclude social and environmental factors previously shown to be significant predictors of treatment response For these reasons, the more inclusive and flexible concept of difficult-to-treat depression was put forward by an international consensus group and a number of factors were identified through literature review and expert consensus, grouped under the categories of PATIENT, ILLNESS and TREATMENT factors (McAllister-Williams et al., 2020). Given the novelty of the framework, there have been few attempts to operationalise it, with the exception of Costa et al. (2022) who partitioned patients from five specialist mental health National Health Service (NHS) Trusts according to a criterion encompassing both recurrence and resistance (at least 4 unsuccessful treatments including two antidepressant medications) and analysed correlations with environmental and clinical factors, confirming previous findings. In the current work, we use the concept of difficult-to-treat depression rather than treatment-resistant depression. 4 Large Language Models and synthetic data for medical and mental health research The recent development of large language models (LLMs) with substantially increased size and capabilities allowed great strides of improvement across domains such as question answering and text summarisation, including in the biomedical domain (Agrawal et al., 2022; Hu et al., 2023; Liu et al., 2023; Tang et al., 2023a; Taylor et al., 2023); becoming potentially able to perform information extraction on large quantities of data. However, privacy considerations prevent directly feeding highly-confidential patient data into LLM APIs such as OpenAI\u2019s chatGPT (Brown et al., 2020). In addition, while LLMs show impressive performance in applications related to text generation, they still fall short of specifically trained SOTA systems on biomedical NLP tasks (Ateia and Kruschwitz, 2023). A recent paper demonstrated that even when fine-tuned on target tasks, a LLaMA model, (Touvron et al., 2023) orders of magnitude bigger (7B vs 100M) and requiring substantially more compute power still underperforms compared to BERT-based discriminative models by up to 10% (Yang et al., 2023). For these reasons, a paradigm is emerging whereby a smaller local domain or task-specific model is fine-tuned on synthetic labelled data generated by an LLM, mitigating concerns of privacy as well as efficiency. This paradigm has been used successfully for diagnosis and mortality prediction by Kweon et al. (2023), who generated synthetic clinical notes using GPT3.5-turbo (Brown et al., 2020) and trained a domain specific LlaMA model, and for Named Entity Recognition and Relation Extraction by Tang et al. (2023b), who used prompts involving named entities and gene/disease relations from PubMed and fine-tuned BERT family models. Another recent work uses LLM-generated synthetic data to augment gold annotated data training a Flan-T5 model to perform multilabelling of sentences for social determinants of health (SDoH) such as housing, employment, transportation, parental status, relationship status and social support (Guevara et al., 2024). 
However, perhaps due to the small amount of added synthetic data (1800 synthetic sentences added to over 30k gold sentences), the synthetic addition does not lead to substantial or consistent improvement in performance (in fact it sometimes worsens it) and the performance training with synthetic data only is extremely poor (< 0.1 F1). In the domain of psychiatry, another study augmented training data with LLM-generated interviews and used traditional machine learning classifiers for binary classification of Post-Traumatic Disorder (PTSD) with a 10% increase in performance (Wu et al., 2023). The generated synthetic data has the potential to mimic important statistical properties and patterns of real data while avoiding the expensive and effortful process of obtaining large quantities of labelled data. Therefore, we intend to attempt training a DTD feature span extraction model exclusively on synthetic data obtained from LLMs. 5 Aims Previous studies used machine learning techniques to predict treatment response or identify treatmentresistant depression. In contrast, the more comprehensive concept of difficult-to-treat depression allows us to leverage a wider variety of factors \u2013 rather than relying on a strict \u2018two-course\u2019 acutephase response to pharmacological interventions \u2013 that potentially enables early detection and more personalised care that addresses reasons for suboptimal treatment response. Recent research has indeed been focusing on early detection and linking prognostic factors with a continuum of treatment response (Lage et al., 2022; Sheu et al., 2023). Furthermore, previous works mostly focused on sentence classification rather than the more complex process of span extraction. Contrary to multilabelling, span extraction presents clinicians with the specific part of the text which is linked to a particular label, allowing them to focus more efficiently on relevant information. Finally, to our knowledge no previous study successfully trained a model exclusively on synthetic data for the purpose of extracting prognostic factors. Success, even partial, would represent an extremely important step in developing AI applications for healthcare, given the known issues of data scarcity and manual labelling costs. Therefore, the present paper aims to: \u2022 Add to the recent body of works demonstrating the utility of leveraging synthetic data for decision-supporting information extraction in biomedical domains \u2022 Introduce an annotation scheme (abductive annotation) that leverages domain experts on narrative clinical notes to facilitate explicit extraction of PATIENT, ILLNESS and TREATMENT related-factors for DTD \u2022 Build a curated synthetic dataset of annotated narrative clinical notes which can be freely shared with the research community \u2022 Build and train a model which extracts spans of text and labels them with the relevant factor with the goal of downstream clinician decision guidance 1 6 The DTD Phenotype In consultation with a clinician expert, we operationalise the prognostic factors originally reported in McAllister-Williams et al. (2020) into the annotation schema presented in Table 1. There are several reasons for the changes made. First, we aim to develop a schema which could potentially be extended to other conditions or strata of populations (e.g. we created general categories for physical and mental comorbidities). 
Second, because we leverage the use of a large language model for generating and annotating data, we endeavour to create labels which are semantically transparent enough 1The code and synthetic data are available at https:// github.com/isabellelorge/dtd that they are likely to be understood by the model and yield better annotation accuracy. Finally, When a clinician interprets the data contained in a patient\u2019s EHR (e.g., to establish a diagnosis) they will employ abductive (rather than inductive or deductive) reasoning (Douven, 2021; Rapezzi et al., 2005; Altable, 2012; Reggia et al., 1985) meaning they will seek evidence to support or refute a number of competing hypotheses (e.g., differential diagnoses). Consequently, we wish to be able to extract both positive and negative evidence in favour or against a designation of difficultto-treat depression in order to provide clinicians with a comprehensive picture of the patient\u2019s current presentation, illness and treatment history for downstream decision making. 7 Synthetic dataset In the first instance, we prompt ChatGPT (GPT3.5turbo-0613) through its Python API to generate and annotate a dataset of 1000 clinical notes with labels from our annotation schema (e.g., [PATIENT_FACTOR(POSITIVE):older_age]). We used a temperature of 1.2 and default values for remaining parameters. Experimenting with various prompts, we find that the best prompts balance the need to provide examples to correct recurrent mistakes in model\u2019s behaviour against the tendency for the model to frequently repeat given examples, even with a high temperature parameter to encourage diversity in token generation. We thus add examples as needed to correct various errors (tendency to not label age, to output only positive labels, to group all labels after the sentence rather than after the relevant span, etc.). An example generated annotated note can be found in Appendix A and the final prompt used can be found in Appendix B. We notice from manual inspection of examples that the output is relatively accurate, though the model does output errors, meaning that the labels are noisy and should be considered as a silver rather than gold standard. We use a combination of regular expressions and heuristics to extract labels from each sentence, discarding labels which do not fit our schema (in a small proportion of notes the model hallucinates new factors or uses a different formatting). We first experiment with a syntactic heuristic to extract shorter spans (labelling the closest finite verb phrase with each label) but eventually settle on a simpler heuristic of labelling all text beFactor Orig. 
New NO_ANNOTATION 10708 35884 family_member_mental_disorder_POS 1188 2337 childhood_abuse_POS 1052 2116 non_adherence_POS 954 1919 side_effects_POS 883 1768 recurrent_episodes_POS 876 1774 multiple_antidepressants_POS 862 1744 multiple_psychotherapies_POS 860 1726 physical_comorbidity_POS 824 1673 long_illness_duration_POS 796 1603 severe_illness_POS 732 1447 anhedonia_POS 731 1449 suicidality_POS 706 1396 antidepressant_dosage_increase_POS 705 1420 multiple_hospitalizations_POS 668 1345 older_age_POS 545 1138 mental_comorbidity_POS 473 941 improvement_POS 403 786 substance_abuse_POS 400 787 illness_early_onset_POS 382 751 substance_abuse_NEG 314 787 multiple_hospitalizations_NEG 247 1375 suicidality_NEG 160 1178 older_age_NEG 140 960 physical_comorbidity_NEG 135 1133 abuse_POS 94 187 improvement_NEG 78 721 mental_comorbidity_NEG 64 680 non_adherence_NEG 60 709 abuse_NEG 53 690 childhood_abuse_NEG 42 805 family_member_mental_disorder_NEG 34 996 side_effects_NEG 29 701 antidepressant_dosage_increase_NEG 18 506 multiple_psychotherapies_NEG 16 746 multiple_antidepressants_NEG 11 587 long_illness_duration_NEG 7 567 severe_illness_NEG 6 488 recurrent_episodes_NEG 6 718 illness_early_onset_NEG 5 382 anhedonia_NEG 4 541 Table 1: Annotation schema labels and span counts for original and new (final) datasets. We shorten the polarity words in the table for space considerations (the labels used in prompts and GPT3.5-turbo outputs use full words, e.g., [PATIENT_FACTOR(POSITIVE):older_age]). fore each label up to sentence boundary (e.g., for \u2019XXXXX [label_X] YYYYY [label_Y]\u2019, we extract XXXXX as span for label X and YYYYY for label Y). This is because early prompt attempts showed that when prompting chatGPT for span boundaries (e.g., \u2019YYYYY{XXX} [label_X]\u2019), the boundaries obtained were very error-prone and unreliable, while a strategy of simply asking the model to insert the label after each relevant span yielded much better results. From the span counts in Table 1, it can be seen that the label distribution is heavily skewed, with a very strong bias in favour of positive factors against negative factors. We believe this would reflect clinicians\u2019 annotations, as positive evidence is much more likely to be expressed and noticed than negative evidence and arguably contributes more heavily to final decision making. However, to reduce data imbalance we prompt the model for another 1000 notes with the same prompt and a third set of 1000 notes with a prompt asking exclusively for negative labels. The final dataset thus contains 3000 notes which yield 75094 sentences. The updated factor counts of the final dataset can be seen in Table 1 and the negative prompt can be found in Appendix C. We explore the word distribution for each factor to try and get a sense of the synthetic dataset\u2019s diversity and/or repetitiveness. To achieve this, we extract words from labelled spans for each label and calculate their TF-IDF score, defined as: tfid f_score = (wl/Wl)/ log(w/wl + 1) (1) Where wl is the frequency of a given word for a given label, Wl is the total number of words for the label and w is the total frequency for the word across labels. A sample of labels and their 10 highest scoring words can be seen in Table 2, along with their prevalence (number of occurrences/number of label spans). Interestingly, while there is a tendency for the model to repeat examples given in the prompt, the extent of this behaviour varies substantially depending on specific examples. 
Indeed, while our prompt example for family_member_mental_disorder_POSITIVE mentions anxiety, the most frequent disorder which appears for this label is bipolar disorder (48% of spans), whereas an overwhelming 80% of spans labelled with mental_comorbidity_POSITIVE do include the word \u2018anxiety\u2019 without having been prompted with it. Similarly, our prompt example for childhood_abuse_POSITIVE mentions motherly abuse but only 19% of spans with this label contain the word \u2018mother\u2019, superseded by 34% of spans mentioning the word \u2018father\u2019. The physical_comorbidity_POSITIVE factor is more evenly distributed than mental_ comorbidity_POSITIVE, with most frequent conditions split between hypertension, diabetes and fibromyalgia. The model displays strong biases which are not driven by prompt examples, as evidenced by 70% of substance_abuse_POSITIVE spans mentioning the word \u2018alcohol\u2019. We hypothesise that the above biases result from the distribution of the model\u2019s training data. We also note that spans from some labels have a high probability of mentioning all label words explicitly: abuse, both NEGATIVE (91%) and POSITIVE (80%), side_effects, both POSITIVE (88%) and NEGATIVE (84%), improvement, both POSITIVE (84%) and NEGATIVE (73%), substance_abuse_NEGATIVE (84%) and childhood_abuse, both NEGATIVE (76%) and POSITIVE (75%). There is a tendency for the model to mention label words explicitly more often for negative than positive labels, probably due to higher variety for spans providing positive evidence. Only 10% of spans for physical_comorbidity_POSITIVE and mental_comorbidity_POSITIVE mention label words explicitly, suggesting that the model still relies on its training data for phrasing and does not systematically repeat prompt labels. The numbers for all labels can be seen in Figure 3 in Appendix D. Finally, to assess the similarity in word overlap between pairs of spans for a given label we calculate the Jaccard similarity between lemmatized words of each pair of spans within a label. The similarities range from 0.02 for spans with no label to 0.35 for substance_abuse_NEGATIVE. The full numbers for all labels can be seen in Figure 4 in Appendix D. 8 Real-World clinical data To evaluate the performance of the developed model on real clinical data, we utilised a sample of de-identified secondary mental health records from the Oxford Health NHS Foundation Trust, which provides mental healthcare services to approximately 1.2 million individuals across all ages in Oxfordshire and Buckinghamshire in England. Access to the de-identified data was obtained through the Clinical Record Interactive Search (CRIS) system powered by Akrivia Health, which enables searching and extraction of de-identified clinical case notes across 17 National Health Service Mental Health Trusts in England. For this study, we sampled clinical summaries for 100 adult patients over 18 years old, randomly selected from 19,921 patients with confirmed diagnosis of depression (ICD-10 codes F32 and F33) readily available from structured data fields in CRIS. Access to and use of de-identified patient records from the CRIS platform has been granted exemption by the NHS Health Research Authority for research reuse of routinely collected clinical data. 
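For concreteness, a minimal re-implementation of the two diversity measures described above is sketched below. It is an approximation: plain lower-cased whitespace tokens stand in for the lemmatised words used in the paper, and the span lists in the usage example are invented.

```python
import math
from collections import Counter
from itertools import combinations

def label_tfidf(spans_by_label, top_k=10):
    """Score each word within a label as (w_l / W_l) / log(w / w_l + 1), following Eq. (1) above."""
    per_label = {lab: Counter(w for span in spans for w in span.lower().split())
                 for lab, spans in spans_by_label.items()}
    total = Counter()
    for counts in per_label.values():
        total.update(counts)
    scores = {}
    for lab, counts in per_label.items():
        W_l = sum(counts.values())  # total number of words for this label
        scores[lab] = sorted(
            ((w, (c / W_l) / math.log(total[w] / c + 1)) for w, c in counts.items()),
            key=lambda x: -x[1])[:top_k]
    return scores

def mean_pairwise_jaccard(spans):
    """Mean Jaccard similarity between the word sets of every pair of spans for one label."""
    sets = [set(s.lower().split()) for s in spans]
    pairs = list(combinations(sets, 2))
    return sum(len(a & b) / (len(a | b) or 1) for a, b in pairs) / len(pairs) if pairs else 0.0

spans = {"substance_abuse_NEGATIVE": ["No history of substance abuse.", "Denies any substance abuse."]}
print(label_tfidf(spans), mean_pairwise_jaccard(spans["substance_abuse_NEGATIVE"]))
```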
Factor Words % family member mental disorder family bipolar history disorder mental mother illness diagnosed reports sister 0.72 0.48 0.87 0.65 0.68 0.44 0.65 0.39 0.41 0.16 childhood abuse childhood abuse emotional neglect father history physical experienced discloses mother 0.83 0.95 0.56 0.30 0.34 0.46 0.35 0.22 0.11 0.19 physical comorbidity hypertension physical diabetes comorbidities pain chronic fibromyalgia medical comorbidity history 0.38 0.62 0.29 0.31 0.19 0.22 0.14 0.25 0.24 0.41 mental comorbidity anxiety generalized disorder comorbid also mental panic diagnosis diagnosed comorbidity 0.80 0.47 0.77 0.28 0.31 0.23 0.08 0.14 0.16 0.15 substance abuse alcohol substance abuse history use mechanism patient specifically cope also 0.70 0.82 0.85 0.53 0.20 0.10 0.34 0.17 0.10 0.23 Table 2: Sample factors with their 10 highest TFIDF scoring words and their % or prevalence (n occurrences/n spans). The project was reviewed and approved by the Oversight Committee of the Oxford Health NHS Foundation Trust and the Research and Development Team. 9 Task We extract character indexes of start, end and corresponding label for labelled spans in each sentence and remove label text from sentence text. There are 41 labels (positive and negative polarity label for each factor and a \u2019NO_ANNOTATION\u2019 label). 10 Models Figure 1: Span-level model architecture. We experiment with a variety of models trading off complexity and granularity. 10.1 Token-level The first model simply leverages a BERT (base, cased) layer for classification at the token level, thus spans are represented as contiguous series of tokens labelled with a specific label. We use one linear layer and no IOB flags (given that each span starts where the previous span ends). The output of the classification layer for each token is fed to a softmax activation layer which outputs probabilities for each label. This is similar to the traditional technique used for Named Entity Recognition (NER). We use a custom Pytorch class and Huggingface\u2019s transformers library (Wolf et al., 2019) implementation for the BERT layer. 10.2 Span-level Given the longer lengths of our spans compared to traditional NER tasks, we also experiment with a model inspired from Question Answering (QA) systems, where the model does not predict a label for each token in the sequence but instead predicts a \u2018start\u2019 and \u2018end\u2019 position. However, traditional Question Answering systems are generally limited to one response span, restricting the scope of the classification task, while in our case there can potentially be any number of spans within a given sequence, each of which could be of any length, expanding the training search space exponentially. To solve this issue, we follow Hu et al. (2019) and take inspiration from computer vision by using a variant of Non-Maximum Suppression (NMS), whereby non-overlapping outputs are selected in decreasing order of confidence. To achieve this, a separate classifier is trained to predict the number (N) of spans within a sequence from the sequence output of a BERT (base, cased) layer, and subsequently the top N non-overlapping start/end pairs with the highest combined probabilities are selected. We find that given start and end of sentences consistently have very high probabilities for our dataset (since spans are full sentences in cases where there is a single factor in the sentence), a greedy approach on combined start/end probabilities as used by Hu et al. (2019) does not work. 
Instead, we predict the number (N) of spans and separately select the top N starts and the top N ends with the highest probabilities, which we order by token number so that non-overlapping start/end combinations are selected in order by taking the closest end for each start (e.g., if there are 2 predicted spans with top starts [1, 10] and top ends [20, 10], the indexes are ordered so that the first span is not [1, 20] but [1, 10], and the second span is [10, 20]; see the illustrative sketch below). The model goes through each selected start/end pair and uses a linear-layer span classifier to label the relevant tokens from the sequence. For training (given possible inconsistencies between the true and predicted number of spans), the output of the span classifier is passed through a softmax layer and all predicted spans are max-pooled into a single vector to match the ground-truth multilabel one-hot encoded vector for the full sentence (soft selection). At inference time, the indices of the maximum probability for each predicted span are taken as labels (hard selection). We also experiment with a version of the model where factors are merged together across polarities and a separate classifier is trained to predict negative polarity from a concatenation of the BERT sequence output and the predicted labels (the results are substantially worse and, in the interest of space, we do not report them). As for the token-level model, we implement a custom class and loss function in PyTorch and leverage Huggingface's implementation of the BERT layer. The architecture of the span-level model can be seen in Figure 1.

10.3 Sentence-level
Finally, we also model the task in a simplified way as a multilabel sentence classification task. For this, we again leverage the sequence output of a BERT (base, cased) layer, which we feed to a linear classifier layer. The output of the classifier is then passed through a sigmoid activation layer and labels with probabilities above a 0.5 threshold are taken as predictions. In this case the start and end positions of spans for the predicted labels are unknown. Again, we use Huggingface's BERT layer implementation and a custom PyTorch class for classification with one linear layer.

11 Training
We split the synthetic dataset into train, development and test sets with proportions 0.8, 0.1 and 0.1. We use a batch size of 16, a learning rate of 3e-05, a dropout rate of 0.1 and a weight decay of 0.001. We experiment with different forms of class weights to mitigate the imbalance in labels. For the span model, we find the model performs better with a logarithmically scaled class-weighted loss, whereas there is no difference for the token-level model, and the sentence-level model performs better without class weights. We train until convergence: 4 epochs for the span model, 5 epochs for the token model and 7 epochs for the sentence-level model.

12 Experiments
12.1 Synthetic data
We first test the model on a held-out test set of synthetic data.

12.1.1 Results
The results on the synthetic test set for the three models can be seen in Table 3 and a log-scaled confusion matrix can be seen in Figure 2. Given the imbalance in labels, we present precision, recall and F1 for each class as well as F1 averaged across classes, with macro-averaged F1 being the accepted standard metric in similar tasks.
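To make the start/end pairing step of the span-level model (Section 10.2) concrete, here is a minimal PyTorch sketch. It illustrates the procedure as described in the text, not the authors' code; the tensor names, function signature and toy usage are assumptions.

```python
# Illustrative sketch: pairing the top-N start and end positions (Section 10.2),
# assuming per-token start/end probabilities and a separately predicted span count.
import torch

def select_spans(start_probs: torch.Tensor, end_probs: torch.Tensor, n_spans: int):
    """Return n_spans non-overlapping (start, end) token-index pairs.

    start_probs, end_probs: 1-D tensors of per-token probabilities (same length).
    """
    # Take the n_spans most probable start and end positions independently ...
    starts = torch.topk(start_probs, n_spans).indices.sort().values
    ends = torch.topk(end_probs, n_spans).indices.sort().values
    # ... then pair them in token order, so each start is matched with the closest
    # admissible end (e.g. starts [1, 10] and ends [10, 20] give the spans
    # (1, 10) and (10, 20) rather than the overlapping pair (1, 20)).
    return list(zip(starts.tolist(), ends.tolist()))

# Toy usage with a 30-token sentence and two predicted spans.
torch.manual_seed(0)
start_p, end_p = torch.rand(30).softmax(0), torch.rand(30).softmax(0)
print(select_spans(start_p, end_p, n_spans=2))
```

Sorting both index lists by token position is what yields the non-overlapping pairs (1, 10) and (10, 20) in the paper's example.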
(Note that, while we used simplified terms in our labels to ensure the best LLM output after prompting trial and error, the appropriate medical terms should be used in the final output of the model for the following: episode severity for severe illness, episode remission for improvement, medical comorbidity for physical comorbidity, substance use for substance abuse, adequate dosage for antidepressant dosage increase and medication adherence for non-adherence.)

The three models perform fairly well. We find our custom span-level model outperforms the simpler, token-level model by a large margin (0.65 vs 0.57). To our surprise, we also find that our span-level model slightly outperforms the sentence-level classification model, despite the added difficulty of having to predict the correct start and end of spans in order to accurately predict labels. The models tend to perform substantially better on the positive classes. The sentence- and span-level models perform similarly across factors, each claiming best performance for an approximately equal number of factors, whereas the token-level model only outperforms the other models for the abuse_POSITIVE class. There appears to be little difference between the performance for the different factor domains (PATIENT, ILLNESS and TREATMENT).

We notice that the model struggles with the category 'older age', which points to a well-known shortcoming of language models (along with negation): the ability to count and evaluate quantities. The confusion matrix in Figure 2 shows how this category gets confused with most other classes (lit-up vertical line). The worst performing positive class is abuse_POSITIVE (most likely due to scarcity of training examples) and the worst performing negative class is improvement_NEGATIVE. Examination of the synthetic data reveals that GPT3.5 particularly struggled with correctly labelling negative examples of improvement, which led to many being in fact positive examples (e.g., There has been improvement in symptoms [ILLNESS_FACTOR:improvement(NEGATIVE)]). This seems in fact to be a side-effect of GPT3.5's 'common sense': all other negative classes indicate a positive outlook (e.g., no history of abuse or disorder, no substance use, no hospitalisations, etc.), thus when prompted for negative examples the LLM produced sentences coherent with the rest of the note, i.e., sentences mentioning improvement. The secondary lit-up diagonal on the left of the confusion matrix confirms that the model struggles with negation, with a tendency to predict the positive counterpart for negative classes and vice versa (right-hand side diagonal).

Figure 2: Log-scaled confusion matrix (synthetic data).

12.2 Clinical data
While the performance of the span extraction model is well above chance, it still appears low given that we are testing the model within distribution (i.e., on the same synthetic data it was trained with), with the expectation of a drop in performance when taken out-of-distribution (i.e., to EHR clinical data). Grounds for this concern are confirmed by a first test on a set of 482 sentences from the above-mentioned electronic health records, annotated by a consultant psychiatrist for our target features, which yields quite low overall performance (0.30 F1). This prompts us to further examine the synthetic training data and to perform in-depth error analysis to understand the reasons underlying the low performance on both the synthetic and the clinical test sets.
The analysis reveals the following issues:

• Many spans are mislabelled across labels.
• Many spans tend to be repetitive and reuse the same words for a given label (e.g., 'family' for family disorder, 'abuse' for substance abuse, etc.).
• The style and format of the synthetic notes (articulate, using possessives, articles, pronouns, auxiliary verbs, punctuation and connectors consistently) differ substantially from those of the real clinical data (telegraphic, with frequent ellipsis of articles, pronouns, verbs and connectors, de-identification placeholders, missing spaces and punctuation, inconsistent casing, etc.), which leads to a lack of robustness in the model predictions (i.e., predictions changing when a pronoun is changed, a period is removed, etc.).
• In up to 30% of cases (from manually examining 100 spans), spans with negative labels are in fact positive spans, which means 1) the model is confused between negative and positive factors and fails to properly learn them and 2) many 'errors' on the synthetic test set are in fact correct predictions with incorrect ground-truth labels.

To address these challenges, we make the following changes:

• We use heuristics to remove the bulk of wrongly labelled spans (by removing most spans for a given label which contain keywords from other labels, leaving a few to avoid overfitting).
• For each label, we upsample the 'diverse' spans (i.e., spans that do not mention label words explicitly, e.g., spans that do not include 'family' or 'disorder' for family disorder, or 'substance' and 'abuse' for substance abuse).
• We add 'noise' to the data by randomly removing possessives, articles, pronouns, auxiliary verbs and punctuation, and by replacing some pronouns with the de-identification placeholder strings ("FFFFF" and "XXXXX") that are used to pseudonymise our research EHR data (a rough illustrative sketch of this step is given after the first clinical results below).
• We switch to BERT-base-uncased to avoid casing issues.
• We remove the older age category (which is available as structured data in an EHR) and merge the childhood abuse and abuse classes into a single abuse category (due to the scarcity of the latter class).
• We remove the sentences with no labels from the original dataset and specifically prompt GPT3.5-turbo for 3,000 clinical sentences which do not mention our target features. This is done both because the original unlabelled sentences contained a high amount of noise (the plan would mention prognostic/risk factors or features but was not annotated by the language model) and to ensure more diversity in unlabelled sentences.

The resulting dataset contains 55,924 sentences. We retrain our model on this new synthetic data and obtain an average F1 of 0.75 on the synthetic test set (keeping in mind that, since a percentage of test set examples are wrongly labelled, this does not reflect the real performance of the model). We then re-test the model on the annotated EHR sentences.

12.2.1 Results
The results on the clinical data can be seen in Table 4. The overall performance is 0.60 F1, with manual perturbation analysis indicating that the model is much more robust to changes in style or format (e.g., removing pronouns or punctuation). The performance varies widely across classes, with some classes performing very well (0.95 F1 for recurrent episodes POSITIVE) while others show poor performance (0.19 F1 for non-adherence NEGATIVE).
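As flagged in the list of changes above, the noise-injection step can be roughly sketched as follows; the word lists, probabilities and regular expressions are assumptions for illustration rather than the authors' exact procedure.

```python
# Minimal sketch (assumed word lists and probabilities, not the authors' pipeline)
# of the noising step: randomly drop articles, pronouns, auxiliaries and punctuation,
# and swap some pronouns for de-identification placeholders.
import random
import re

ARTICLES = {"a", "an", "the"}
PRONOUNS = {"he", "she", "they", "him", "her", "them", "his", "their"}
AUXILIARIES = {"is", "are", "was", "were", "has", "have", "had", "be", "been"}
PLACEHOLDERS = ["XXXXX", "FFFFF"]  # pseudonymisation strings used in the research EHR data

def add_noise(sentence: str, p_drop: float = 0.3, p_placeholder: float = 0.3) -> str:
    noisy = []
    for token in sentence.split():
        word = re.sub(r"[^\w]", "", token).lower()
        if word in PRONOUNS and random.random() < p_placeholder:
            noisy.append(random.choice(PLACEHOLDERS))    # replace pronoun with placeholder
        elif word in ARTICLES | AUXILIARIES and random.random() < p_drop:
            continue                                      # drop the function word entirely
        elif random.random() < p_drop:
            noisy.append(re.sub(r"[.,;:]", "", token))    # strip some punctuation
        else:
            noisy.append(token)
    return " ".join(noisy)

random.seed(0)
print(add_noise("She has been taking sertraline, but reports no improvement."))
```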
Performance for negative classes is, again unsurprisingly, worse than for positive classes. In general, recall is higher than precision, with a tendency for the model to overpredict factors. This is unsurprising given the high topical overlap and broad range of our factors. Indeed, negative classes such as negative multiple antidepressants (e.g., She takes antidepressant X), negative anhedonia (e.g., She enjoys hobbies like painting and reading) or negative abuse (e.g., She likes spending time with her family) might be virtually indistinguishable from NO_ANNOTATION spans, and arguably most sentences could be said to contain a span within our target factors if we include negative classes. We come back to this issue in the Discussion.

To remedy the model's oversensitivity, we can increase the confidence threshold for predicting factors, that is, only output factor predictions made with probability above 0.5 and no label otherwise. This also has the advantage of ensuring that any output predictions are robust (i.e., made with high confidence rather than being 'lucky guesses'), which is particularly desirable in medical settings. Finally, it increases recall of the NO_ANNOTATION class to 0.90. When this threshold is applied, four high-confidence classes emerge (abuse POSITIVE, family member mental disorder POSITIVE, severe illness POSITIVE and suicidality POSITIVE) which can be confidently predicted with 0.85 average F1 and 0.95 precision (see Table 5). A significant factor contributing to the model's lower performance is confusion between positive and negative classes (partially due to noise in the synthetic training data, as mentioned previously). This is demonstrated by collapsing predictions across polarities, which increases average F1 to 0.70 (see Table 6). Given that the goal of the model is to present clinicians with evidence from extracted spans for their consideration, the non-polarised model could also have clinical utility. (A short illustrative sketch of these two post-processing steps is given after Table 5 below.)

13 Example extractions
Here we show some examples of successful span extractions in synthetic test sentences, with (start, end, label) for each extracted span:

• Treatment History: patient already tried multiple antidepressant medications from different classes including SSRIs and SNRIs but did not experience significant improvement.
spans: (0, 46, multiple antidepressants POSITIVE); (47, 74, improvement NEGATIVE)
• XXXXX has been inpatient twice for mental health treatment due to severity of illness with recurrent episodes of major depressive disorder occurring approximately every 3-4 months.
spans: (0, 81, multiple hospitalizations POSITIVE); (82, 175, recurrent episodes POSITIVE)
• FFFFF reports no family history of mental disorders and denies any history of abuse
spans: (0, 51, family member mental disorder NEGATIVE); (52, 83, abuse NEGATIVE)
• patient reports no significant physical comorbidities but mentions mild anxiety symptoms
spans: (0, 53, physical comorbidity NEGATIVE); (54, 88, mental comorbidity POSITIVE)
• has been on citalopram, fluoxetine and sertraline
spans: (0, 57, multiple antidepressants POSITIVE)
• patient recalls being severely bullied at school
spans: (0, 48, abuse POSITIVE)
• she battles with heroin addiction
spans: (0, 33, substance abuse POSITIVE)
• he denies intent to end his own life
spans: (0, 36, suicidality NEGATIVE)
• she did not suffer any neglect as a child
spans: (0, 41, abuse NEGATIVE)
• he talked about how his friend was bullied
spans: (0, 42, NO ANNOTATION)
• her brother was diagnosed with ptsd
spans: (0, 35, family member mental disorder POSITIVE)

14 Discussion
In the synthetic test dataset, we find that our custom span-level model, which uses a variant of Non-Maximum Suppression (NMS), outperforms the simpler token-level model that is standardly used for span extraction. Additionally, we find that the sentence-level model performs slightly below the span-level model overall. We hypothesise that this might be because the span-level model learns to map labels to the relevant tokens more specifically, rather than relying on fuzzier learning over full sentences. Performance on positive classes is significantly higher than on negative classes. This is due to a number of reasons, which we discuss after the result tables below.

Class | Sentence-level (P / R / F1) | Span-level (P / R / F1) | Token-level (P / R / F1)
older age POSITIVE | 0.46 / 0.57 / 0.51 | 0.42 / 0.83 / 0.56 | 0.5 / 0.57 / 0.54
family member mental disorder POSITIVE | 0.75 / 0.9 / 0.82 | 0.74 / 0.9 / 0.81 | 0.7 / 0.89 / 0.78
abuse POSITIVE | 0.6 / 0.25 / 0.35 | 0.69 / 0.38 / 0.49 | 0.84 / 0.4 / 0.54
childhood abuse POSITIVE | 0.77 / 0.87 / 0.81 | 0.73 / 0.89 / 0.8 | 0.78 / 0.85 / 0.81
long illness duration POSITIVE | 0.72 / 0.76 / 0.74 | 0.79 / 0.64 / 0.7 | 0.7 / 0.69 / 0.69
severe illness POSITIVE | 0.45 / 0.33 / 0.38 | 0.47 / 0.39 / 0.43 | 0.34 / 0.22 / 0.26
suicidality POSITIVE | 0.61 / 0.64 / 0.63 | 0.69 / 0.61 / 0.65 | 0.48 / 0.63 / 0.55
multiple hospitalizations POSITIVE | 0.8 / 0.84 / 0.82 | 0.76 / 0.88 / 0.81 | 0.66 / 0.75 / 0.7
recurrent episodes POSITIVE | 0.63 / 0.62 / 0.63 | 0.65 / 0.73 / 0.69 | 0.55 / 0.65 / 0.59
improvement POSITIVE | 0.58 / 0.64 / 0.61 | 0.55 / 0.57 / 0.56 | 0.47 / 0.58 / 0.52
physical comorbidity POSITIVE | 0.72 / 0.76 / 0.74 | 0.64 / 0.83 / 0.73 | 0.72 / 0.8 / 0.76
mental comorbidity POSITIVE | 0.81 / 0.75 / 0.78 | 0.78 / 0.72 / 0.74 | 0.81 / 0.72 / 0.76
substance abuse POSITIVE | 0.74 / 0.64 / 0.68 | 0.71 / 0.83 / 0.77 | 0.67 / 0.78 / 0.72
anhedonia POSITIVE | 0.58 / 0.67 / 0.62 | 0.62 / 0.8 / 0.7 | 0.59 / 0.7 / 0.64
illness early onset POSITIVE | 0.65 / 0.63 / 0.64 | 0.67 / 0.75 / 0.7 | 0.61 / 0.55 / 0.58
multiple antidepressants POSITIVE | 0.82 / 0.78 / 0.8 | 0.79 / 0.83 / 0.81 | 0.7 / 0.79 / 0.74
antidepressant dosage increase POSITIVE | 0.73 / 0.86 / 0.79 | 0.82 / 0.78 / 0.8 | 0.68 / 0.65 / 0.66
multiple psychotherapies POSITIVE | 0.64 / 0.75 / 0.69 | 0.73 / 0.72 / 0.72 | 0.66 / 0.8 / 0.72
side effects POSITIVE | 0.81 / 0.71 / 0.76 | 0.74 / 0.74 / 0.74 | 0.67 / 0.68 / 0.68
non adherence POSITIVE | 0.75 / 0.66 / 0.7 | 0.76 / 0.77 / 0.76 | 0.68 / 0.68 / 0.68
older age NEGATIVE | 0.44 / 0.55 / 0.49 | 0.58 / 0.41 / 0.48 | 0.49 / 0.39 / 0.43
family member mental disorder NEGATIVE | 0.71 / 0.62 / 0.66 | 0.72 / 0.71 / 0.72 | 0.62 / 0.5 / 0.55
abuse NEGATIVE | 0.54 / 0.76 / 0.63 | 0.6 / 0.8 / 0.69 | 0.55 / 0.7 / 0.62
childhood abuse NEGATIVE | 0.73 / 0.74 / 0.74 | 0.64 / 0.69 / 0.66 | 0.53 / 0.49 / 0.51
long illness duration NEGATIVE | 0.49 / 0.4 / 0.44 | 0.58 / 0.43 / 0.5 | 0.42 / 0.35 / 0.38
severe illness NEGATIVE | 0.37 / 0.31 / 0.34 | 0.62 / 0.41 / 0.49 | 0.31 / 0.41 / 0.35
suicidality NEGATIVE | 0.68 / 0.85 / 0.75 | 0.61 / 0.77 / 0.68 | 0.64 / 0.71 / 0.67
multiple hospitalizations NEGATIVE | 0.76 / 0.83 / 0.8 | 0.77 / 0.79 / 0.78 | 0.72 / 0.68 / 0.7
recurrent episodes NEGATIVE | 0.75 / 0.48 / 0.59 | 0.76 / 0.49 / 0.6 | 0.58 / 0.39 / 0.46
improvement NEGATIVE | 0.42 / 0.42 / 0.42 | 0.35 / 0.49 / 0.4 | 0.34 / 0.26 / 0.29
physical comorbidity NEGATIVE | 0.66 / 0.71 / 0.68 | 0.67 / 0.68 / 0.68 | 0.59 / 0.61 / 0.6
mental comorbidity NEGATIVE | 0.65 / 0.68 / 0.67 | 0.72 / 0.65 / 0.68 | 0.5 / 0.56 / 0.53
substance abuse NEGATIVE | 0.7 / 0.91 / 0.79 | 0.69 / 0.89 / 0.78 | 0.62 / 0.75 / 0.68
anhedonia NEGATIVE | 0.55 / 0.3 / 0.39 | 0.61 / 0.34 / 0.44 | 0.56 / 0.3 / 0.39
illness early onset NEGATIVE | 0.7 / 0.57 / 0.63 | 0.75 / 0.53 / 0.62 | 0.53 / 0.5 / 0.51
multiple antidepressants NEGATIVE | 0.32 / 0.2 / 0.25 | 0.59 / 0.37 / 0.45 | 0.37 / 0.3 / 0.33
antidepressant dosage increase NEGATIVE | 0.56 / 0.57 / 0.56 | 0.54 / 0.59 / 0.56 | 0.53 / 0.41 / 0.46
multiple psychotherapies NEGATIVE | 0.58 / 0.52 / 0.55 | 0.63 / 0.37 / 0.47 | 0.47 / 0.2 / 0.28
side effects NEGATIVE | 0.58 / 0.67 / 0.62 | 0.58 / 0.64 / 0.61 | 0.56 / 0.37 / 0.44
non adherence NEGATIVE | 0.39 / 0.43 / 0.41 | 0.37 / 0.44 / 0.4 | 0.33 / 0.42 / 0.37
NO ANNOTATION | 0.81 / 0.79 / 0.8 | 0.84 / 0.77 / 0.8 | 0.8 / 0.81 / 0.81
POSITIVE | 0.68 / 0.68 / 0.68 | 0.69 / 0.73 / 0.70 | 0.64 / 0.67 / 0.64
NEGATIVE | 0.59 / 0.59 / 0.58 | 0.63 / 0.58 / 0.59 | 0.53 / 0.49 / 0.50
PATIENT | 0.63 / 0.66 / 0.63 | 0.64 / 0.70 / 0.65 | 0.63 / 0.59 / 0.59
ILLNESS | 0.64 / 0.63 / 0.63 | 0.66 / 0.64 / 0.64 | 0.57 / 0.59 / 0.57
TREATMENT | 0.62 / 0.62 / 0.61 | 0.65 / 0.62 / 0.63 | 0.58 / 0.52 / 0.54
All | 0.63 / 0.63 / 0.63 | 0.66 / 0.65 / 0.65 | 0.58 / 0.57 / 0.57
Table 3: Precision, recall and macro-averaged F1 for each model and factor (synthetic data); best in bold in the original.

Class | Precision | Recall | F1 | N
NO ANNOTATION | 0.59 | 0.34 | 0.43 | 99
anhedonia POSITIVE | 0.76 | 0.59 | 0.67 | 27
antidepressant dosage increase POSITIVE | 0.31 | 0.83 | 0.45 | 12
abuse POSITIVE | 0.94 | 0.80 | 0.86 | 20
family member mental disorder POSITIVE | 0.61 | 0.92 | 0.73 | 12
illness early onset POSITIVE | 0.50 | 1.00 | 0.67 | 8
improvement POSITIVE | 0.68 | 0.42 | 0.52 | 31
long illness duration POSITIVE | 1.00 | 0.80 | 0.89 | 5
mental comorbidity POSITIVE | 0.43 | 0.38 | 0.40 | 8
physical comorbidity POSITIVE | 0.50 | 0.67 | 0.57 | 9
multiple antidepressants POSITIVE | 0.75 | 0.62 | 0.68 | 29
multiple psychotherapies POSITIVE | 0.38 | 0.38 | 0.38 | 13
non adherence POSITIVE | 0.50 | 0.19 | 0.27 | 16
recurrent episodes POSITIVE | 1.00 | 0.90 | 0.95 | 10
severe illness POSITIVE | 0.55 | 1.00 | 0.71 | 6
side effects POSITIVE | 0.79 | 0.44 | 0.57 | 34
substance abuse POSITIVE | 0.71 | 0.62 | 0.67 | 13
suicidality POSITIVE | 0.80 | 0.75 | 0.77 | 32
antidepressant dosage increase NEGATIVE | 0.50 | 0.56 | 0.53 | 16
improvement NEGATIVE | 0.60 | 0.16 | 0.25 | 19
multiple antidepressants NEGATIVE | 0.20 | 1.00 | 0.33 | 3
multiple psychotherapies NEGATIVE | 0.51 | 0.93 | 0.66 | 28
multiple hospitalizations NEGATIVE | 0.75 | 1.00 | 0.86 | 3
non adherence NEGATIVE | 0.12 | 0.40 | 0.19 | 5
severe illness NEGATIVE | 0.64 | 1.00 | 0.78 | 7
side effects NEGATIVE | 0.65 | 0.68 | 0.67 | 19
substance abuse NEGATIVE | 0.33 | 0.33 | 0.33 | 3
suicidality NEGATIVE | 0.78 | 0.78 | 0.78 | 9
POSITIVE | 0.66 | 0.67 | 0.63 | 285
NEGATIVE | 0.51 | 0.68 | 0.54 | 98
All | 0.60 | 0.66 | 0.60 | 482
Table 4: Precision, recall and macro-averaged F1 for each factor (clinical data). Classes with n < 2 excluded.

Class | Precision | Recall | F1
abuse POSITIVE | 1.00 | 0.80 | 0.89
family member mental disorder POSITIVE | 0.85 | 0.92 | 0.88
severe illness POSITIVE | 1.00 | 0.67 | 0.80
suicidality POSITIVE | 0.96 | 0.75 | 0.84
All | 0.95 | 0.78 | 0.85
Table 5: Precision, recall and macro-averaged F1 for each factor (clinical data, high-confidence classes with a 0.5 confidence threshold).
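The two post-processing steps reported in Table 5 (confidence thresholding) and Table 6 (collapsing polarities) can be sketched as follows; the data structures and names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: keep only factor predictions made above a confidence
# threshold, and collapse POSITIVE/NEGATIVE polarities for the non-polarised evaluation.
from typing import Dict, List

def apply_threshold(span_probs: Dict[str, float], threshold: float = 0.5) -> str:
    """Return the most probable factor label if it clears the threshold, else NO_ANNOTATION."""
    label, prob = max(span_probs.items(), key=lambda kv: kv[1])
    return label if prob >= threshold else "NO_ANNOTATION"

def collapse_polarity(labels: List[str]) -> List[str]:
    """Map e.g. 'suicidality NEGATIVE' and 'suicidality POSITIVE' onto 'suicidality'."""
    return [lab.replace(" POSITIVE", "").replace(" NEGATIVE", "") for lab in labels]

preds = {"suicidality POSITIVE": 0.62, "abuse POSITIVE": 0.21, "NO_ANNOTATION": 0.17}
print(apply_threshold(preds))                               # 'suicidality POSITIVE'
print(collapse_polarity(["abuse NEGATIVE", "improvement POSITIVE"]))
```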
Class | Precision | Recall | F1
NO ANNOTATION | 0.59 | 0.34 | 0.43
anhedonia | 0.75 | 0.86 | 0.80
antidepressant dosage increase | 0.48 | 0.86 | 0.62
abuse | 0.95 | 0.90 | 0.93
family member mental disorder | 0.61 | 0.92 | 0.73
illness early onset | 0.33 | 1.00 | 0.50
improvement | 0.88 | 0.42 | 0.57
long illness duration | 0.67 | 0.57 | 0.62
mental comorbidity | 0.38 | 0.38 | 0.38
physical comorbidity | 0.50 | 0.80 | 0.62
multiple antidepressants | 0.79 | 0.97 | 0.87
multiple psychotherapies | 0.64 | 1.00 | 0.78
multiple hospitalisations | 0.75 | 0.75 | 0.75
non adherence | 0.55 | 0.57 | 0.56
recurrent episodes | 0.92 | 1.00 | 0.96
severe illness | 0.59 | 1.00 | 0.74
side effects | 0.87 | 0.64 | 0.74
substance abuse | 0.88 | 0.79 | 0.83
suicidality | 0.87 | 0.83 | 0.85
All | 0.68 | 0.77 | 0.70
Table 6: Precision, recall and macro-averaged F1 for each factor (clinical data, non-polarised).

First, despite our additional prompting for negative factors exclusively, there is still an imbalance, with more positive than negative labels. Second, it is a well-known fact that language models struggle with negation (Ettinger, 2020). Finally, negation will often be present in the sentence but not necessarily in each negated span, even when the scope of the negation encompasses the span, e.g., The patient denies suicidality [suicidality NEGATIVE] and substance abuse [substance abuse NEGATIVE], where the text of the second span does not contain an explicit negation. While the contextual embeddings of each span's tokens should carry some signal indicating that there is a negation somewhere in the sentence, it might not be strong enough compared to classifying a span which contains an explicit negation token.

While average F1 on real data is 0.60, our model trained exclusively on synthetic data already has practical clinical use, as it can be used out of the box with a confidence threshold of 0.5 to extract abuse, family disorder, severe illness and suicidality with 0.85 F1 and 0.95 precision, a performance comparable to Kulkarni et al. (2024), who used real data annotated manually, in a costly and time-consuming way, to train a model to extract two clinical factors (suicidality and anhedonia). This is despite the high syntactic and semantic variability among spans expressing these factors. For example, our model is able to identify spans mentioning events as varied as emotional neglect, violence or bullying as abuse, and a wide range of combinations of family members and conditions (brother, aunt, schizophrenia, bipolar disorder, etc.) as a family history of mental disorder. Obtaining such high performance with a model trained on synthetic data only shows that this is a promising direction of research for cost-efficient AI applications in healthcare. Indeed, the cost of producing the synthetic training data used in this study was under £10, versus the thousands of pounds required to compensate expert annotators.

We believe there are several reasons the model fails to achieve higher average performance across all factors. First, many classes are very close to one another, for example long duration and early onset, or the classes which mention antidepressants, physical comorbidities and side effects. It is no coincidence that the model performs best on the most distinctive classes (abuse, family disorder, suicidality). Secondly, many classes are highly subjective, e.g., early onset (how early?), long duration (how long?), substance abuse (how much consumption?), and spans are often ambivalent (e.g., 'some improvement then worsening'; 'some side effects then none', etc.).
Finally, many negative classes are not well defined or consistent, e.g., negative multiple psychotherapies (mentions only one therapy? any span mentioning therapy?), negative multiple antidepressants (any span mentioning a single antidepressant?), negative multiple hospitalisations (having been hospitalised once? never?), negative anhedonia (any span mentioning the subject performing activities?), negative severe illness (moderate illness? mild symptoms?), etc. In view of these challenges, it is likely that training a model to extract a wide range of factors requires some real annotated clinical data; however, impressively high performance can already be achieved on a subset of factors by training exclusively on synthetic data.

15 Future work
The paradigm we used could be extended and scaled to other phenotypes with a different set of risk/prognostic factors or clinical features. Future research could also investigate whether using a model which has been domain-pretrained on mental health data (such as MentalBERT; Ji et al., 2021) would improve performance. Works such as Guevara et al. (2024) suggest that a sequence-to-sequence model such as T5 could achieve even better performance than a BERT-based classifier model. Using GPT4 instead of GPT3.5 could help generate synthetic data with reduced noise and more accurate labelling of negative factors. Finally, an optimal weighting scheme for the extracted factors, which would best allow identification of difficult-to-treat depression, could be developed in consultation with clinicians.

16 Conclusion
The goal of this study was to train a model to extract spans which contain factors associated with the syndrome of difficult-to-treat depression. To achieve this, we generated annotated synthetic clinical notes with both positive and negative factors of interest using a Large Language Model (GPT3.5-turbo) and subsequently trained various BERT-based classifier models (sentence, token and span level) to extract factors. We show that it is possible to obtain good performance on real clinical data on a set of as many as 20 different factors, and high performance on a subset of clinically relevant factors, by training exclusively on LLM-generated synthetic data.

17 Acknowledgements
I.L., A.K. and D.W.J. were supported in part by the NIHR AI Award for Health and Social Care (AI-AWARD02183), and A.K. by a research grant from GlaxoSmithKline. The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR or the UK Department of Health. This study was supported by CRIS Powered by Akrivia Health, using data, systems and support from the NIHR Oxford Health Biomedical Research Centre (BRC-1215-20005) Research Informatics Team.
We would also like to acknowledge the work and support of the Oxford Research Informatics Team: Tanya Smith, Research Informatics Manager, Adam Pill and Suzanne Fisher, Research Informatics Systems Analysts, and Lulu Kane, Research Informatics Administrator.", "introduction": "∗Department of Psychiatry, University of Oxford, UK; †Civic Health Innovation Lab and Institute of Population Health, University of Liverpool, UK; ‡Mersey Care NHS Foundation Trust, UK; §Oxford Health NHS Foundation Trust, Warneford Hospital, UK; ¶Oxford Precision Psychiatry Lab, NIHR Oxford Health Biomedical Research Centre, UK

Major depressive disorder (MDD) is highly prevalent, with a heavy economic, social and personal burden of disability worldwide, affecting up to 6-12% of the adult global population, a proportion which has been rising in the last few years (Santomauro et al., 2021). The DSM-V (Diagnostic and Statistical Manual of Mental Disorders V) operationalises MDD as a continuous period of at least two weeks characterised by a change in mood leading to loss of interest or pleasure in activities, along with other symptoms such as weight loss or gain, sleep or cognitive issues and suicidal thoughts causing substantial distress or impairment to functioning (American Psychiatric Association et al., 2013).

Depressive disorders are conditions with suboptimal treatment outcomes, with up to 70% of patients failing to achieve remission after receiving pharmacological treatment (Caldiroli et al., 2021). This led to the development of the concept of treatment-resistant depression (TRD) and, although definitions vary (Brown et al., 2019), a common point of consensus is the failure to achieve treatment response after sequential, adequate-duration and minimally-effective dosed trials of two antidepressant-class medications. Designating a patient as having TRD focuses on acute-phase symptom improvement in response to pharmacological intervention.

Despite these efforts at refining the definition of depression to address treatment responsiveness, the cumulative rate of chronicity and lack of response or remission amongst MDD patients remains high, with around 30% of patients not achieving remission even after four courses of antidepressants (Rush et al., 2006). In addition, treatment-resistant depression may be associated with higher risks of suicide (Papakostas et al., 2003). This underlines the importance of improving the identification of the relevant features (or signature) of people with depression where treatment has not provided adequate remission. The relatively new concept of difficult-to-treat depression is proposed by McAllister-Williams et al. (2020) as a more comprehensive model that emphasises biomedical, psychological and social factors and interventions that may influence response to treatment beyond the acute symptomatic response to pharmacological interventions in TRD." }, { "url": "http://arxiv.org/abs/2305.16426v2", "title": "Not wacky vs. definitely wacky: A study of scalar adverbs in pretrained language models", "abstract": "Vector space models of word meaning all share the assumption that words occurring in similar contexts have similar meanings. In such models, words that are similar in their topical associations but differ in their logical force tend to emerge as semantically close, creating well-known challenges for NLP applications that involve logical reasoning.
Modern pretrained language models, such as BERT, RoBERTa and GPT-3, hold the promise of performing better on logical tasks than classic static word embeddings. However, reports are mixed about their success. In the current paper, we advance this discussion through a systematic study of scalar adverbs, an under-explored class of words with strong logical force. Using three different tasks, involving both naturalistic social media data and constructed examples, we investigate the extent to which BERT, RoBERTa, GPT-2 and GPT-3 exhibit general, human-like knowledge of these common words. We ask: 1) Do the models distinguish amongst the three semantic categories of MODALITY, FREQUENCY and DEGREE? 2) Do they have implicit representations of full scales from maximally negative to maximally positive? 3) How do word frequency and contextual factors impact model performance? We find that despite capturing some aspects of logical meaning, the models fall far short of human performance.", "authors": "Isabelle Lorge, Janet Pierrehumbert", "published": "2023-05-25", "updated": "2023-10-22", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "main_content": "We begin by introducing some concepts from linguistic semantics and pragmatics that motivate our study.

2.1 Scales and operators
The workhorse of document retrieval is the fact that the topic under discussion hugely influences which entities people refer to. According to semantic theory, individual unique entities are referred to by proper nouns; common nouns refer to sets of entities. The bursts in uses of proper and common nouns associated with the current topic provide the basis for the distributional hypothesis about word meanings. In contrast to nouns, many other types of words have more complicated semantic structures. Partee (1992) develops a typology of word types according to what implicit variables they contain. In Altmann et al. (2009), this typology is simplified and applied to explain why some types of words are typically much less bursty than nouns. Of particular relevance here is the distinction between entities and operators. Operators are words that have hidden variables in their semantic representations, which are supplied by the context. The many different ways of filling in these hidden variables mean that they can be used in many different contexts. As a corollary, Altmann et al. (2009) demonstrate that they are much less bursty than words referring to entities.

One much-studied class of operators is gradable adjectives such as hot or tall. These position the expression they modify on a scale. By using them, the speaker indicates that the modified expression has a position on the scale that is more extreme than a given threshold, which is inferred from the context (Lassiter and Goodman, 2013). For example, on hearing someone described as tall, or water described as hot, the listener will apply their knowledge about people's heights, or water temperatures, to infer that the height or temperature being described is significantly above its typical value. This means that the formal semantic representations of tall and hot each contain a hidden variable representing the threshold, whose value is contextually determined. For negative adjectives such as short or cold, the corresponding inference is that the value falls below a critical threshold. The adverbs in our study themselves modify scalar adjectives, introducing a further level of abstraction. Consider the following sentences:

1. The water is VERY hot. (DEGREE)
2. The water is OFTEN hot. (FREQUENCY)
3. The water is PROBABLY hot. (MODALITY)

Adverbs of DEGREE simply move the degree threshold of the original adjective (Bennett and Goodman, 2018), i.e., very hot water has an inferred range of temperatures higher than hot water. Adverbs of FREQUENCY, on the other hand, do not act on the (continuous) intensity of a single event, but rather describe a point on a scale of discrete occurrences of the relevant property (Doetjes, 2007). Lastly, modal or epistemic adverbs do not modify the adjectival property itself, but instead express the speaker's evaluation of the likelihood of the property (Lang, 1979). Because they pertain to different scales, these categories can be freely combined without contradiction; e.g., a person may be often slightly angry, certainly slightly angry, occasionally very angry, or sometimes definitely angry.

Thanks to their hidden, contextually determined variables, operators are freely available in a wide variety of contexts. While always and never differ greatly in their logical force, they may differ little in their contexts of use; it follows that the particular context may provide little information about which operator the speaker actually selected in any given instance.

2.2 Entailment
One of the main tasks used to evaluate natural language inference is an entailment task. We accordingly define an entailment task to probe how well LLMs reason about scalar adverbs. For a typical entailment task (e.g., MNLI; Williams et al., 2018), crowdworkers are asked to label the relations between two ordered sentences as entailment, neutral or contradiction. Thus the NLP literature considers sentence A to entail sentence B if B can normally be inferred from A. This is a loose definition relative to the research literature in linguistics, which has, since Grice (1975), drawn a critical distinction between entailment (a semantic concept based on the logical meanings of words) and implicature (Huang, 2011). Implicatures are inferences made in the context of the discourse, including relevant real-world knowledge, which do not meet the strict criteria for logical entailment. Unlike entailments, they are readily cancelled without engendering contradictions. For example, the MNLI dataset characterizes ... people began to form a line ... as 'entailing' ... people formed a line .... However, initiating an action does not necessarily mean that the action was completed; something might happen to prevent this. Pervasive confusion between entailment and implicature in the NLP literature means that levels of success on entailment tasks may be inflated by common associations of events rather than logical reasoning. Here, we confine our attention to entailment in the strict logical sense. A critical observation is that entailment may only be strictly defined over word relationships that involve the same scale. For example, if it is very cold, it is at least somewhat cold; but if it is very cold, it is unclear whether it is at least often cold. Previous work indicates that pretrained language models struggle with entailment relations such as hypernymy and hyponymy (Guerin and Chemla, 2023).

3 Materials
Building on the approach of Ribeiro et al. (2020) and Röttger et al. (2021), we had the goal of designing a balanced diagnostic dataset for probing how well models capture the meanings of scalar adverbs.
Our primary dataset consists of 960 items, based on posts from the year 2015 in the Reddit politosphere dataset introduced by Hofmann et al. (2022). This slice represents about 6GB of data from a range of political subreddits (e.g., r/conservative or r/anarchist). It offers naturalness and domain consistency as well as a certain amount of diversity in linguistic usage. To select the posts, we first used SpaCy (Honnibal and Montani, 2017) to extract phrases of the form 'ADV ADJ.' where there is a dependency between the adjective and the adverb (a short illustrative sketch of this extraction step is given below). We take only phrases in which this construction occurs in final position, so that the phrases are also guaranteed to be well-formed in isolation. We then include the previous context from the same post, up to a maximum of 40 words, cutting at a sentence boundary. In semantic theory, preceding context is known to be important (Beaver, 2001; Roberts, 1995), but this factor is unfortunately disregarded in most of the related NLP research.

Aiming for at least 40 different adjectives to occur with each target adverb, we selected 8 distinct adverbs that express the speaker's judgment on a scale of likelihood (MODALITY), 8 that express a position on a temporal scale (FREQUENCY), and 8 with more general applicability (DEGREE). The adverbs were selected to span the full range of each scale, and hence include adverbs with negative force, i.e., hardly and never. To shed light on the contrast between scalar adverbs and outright negation, the word not is reserved as a benchmark and not used as a target. Both the adverbs themselves and the ADV ADJ bigrams were selected to span the range of available frequencies to the extent possible, using Google Ngram frequencies (Lin et al., 2012). The target adverbs are listed in Table 1.

Target | Context
certainly | You are conflating the issue. Slavery was not moral but it was [MASK] legal.
frequently | Doesn't really matter what republicans say, democrats are going to call them racist. Because what Republicans say is [MASK] racist.
very | You need verifiable proof. I mean, it's not like saying you're self trained is [MASK] reputable.
Table 2: Example target phrases and sentences for the MLM task.

According to Paradis (1998), some of our chosen scalar adverbs are 'maximizers' (e.g., completely) which tend to occur with extreme adjectives (e.g., freezing) or limit adjectives (e.g., dead). Others typically combine with stereotypically scalar adjectives (e.g., cold). However, these are tendencies rather than rules (Kamoen et al., 2011). Indeed, phrases such as very dead or completely cold can be perfectly acceptable in some contexts (e.g., I can assure you he was very dead). Therefore, we do not restrict our phrases to traditional scalar adjectives and include any occurrences involving the target adverbs. Example items and targets can be seen in Table 2. The dataset features (word lengths and adjective overlap between adverbs) can be found in Table 11 in the Appendix. These items were used as such in an MLM task. The same target adverbs are also used in the entailment task, but with items constructed from templates rather than from the natural contexts.
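A rough sketch of the 'ADV ADJ.'-extraction step described above, under the assumption of a small English spaCy model; the paper does not state which model or exact filtering rules were used, so the details here are illustrative.

```python
# Illustrative sketch: keep sentence-final adjectives that carry an adverbial
# modifier, together with up to 40 words of preceding context from the same post.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; the paper does not specify one

def extract_adv_adj(post: str):
    doc = nlp(post)
    hits = []
    for sent in doc.sents:
        tokens = [t for t in sent if not t.is_punct]
        if len(tokens) < 2:
            continue
        adj = tokens[-1]  # the construction must occur in final position
        advs = [c for c in adj.children if c.dep_ == "advmod" and c.pos_ == "ADV"]
        if adj.pos_ == "ADJ" and advs:
            context = doc[:sent.start].text.split()[-40:]  # up to 40 preceding words
            hits.append((" ".join(context), advs[0].text, adj.text))
    return hits

print(extract_adv_adj("The bill is 600 pages long. A lot of it is really dry."))
```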
For the adverb ranking task, we combined each target scalar adverb with (the same) 40 common adjectives in order to obtain averaged, generic embeddings: able, bad, big, black, clear, different, early, easy, economic, federal, free, full, good, hard, high, human, important, international, large, late, little, local, low, military, national, new, old, only, political, possible, public, real, recent, right, small, social, special, strong, white, young. We describe the construction of the entailment items in Section 4.4 below.

4 Tasks
Similar to Talmor et al. (2020), Liu et al. (2021) and Jiang et al. (2022), our main tasks are zero-shot evaluations without fine-tuning, so as to examine the representations learned from pretraining. We first evaluate the extent to which the rankings in Table 1 can be recovered from the embedding space of BERT and RoBERTa. We then look at MLM (as one of the training objectives for the models) and finally entailment (as a canonical logical task). In the Appendix, we also consider a model fine-tuned on a Natural Language Inference dataset (MNLI; Williams et al., 2018).

4.1 Ranking adverbs by scalar position
Our first question is whether the rankings of the various scalar adverbs along their relevant scales are observable in the embeddings. Resources that provide scalar rankings for adverbs are scarce, and the few available, such as Taboada et al. (2011), confound scalar position with other factors. Therefore, we defined our own gold standard (cf. Table 1) on the basis of WordNet definitions. We first applied both methods described in Soler and Apidianaki (2020) for assessing the scalar position of scalar adjectives. Their first method (SIM) uses a reference point, specifically the top end of each scale, and computes the cosine similarity of each target from the reference point; the similarity should decrease as we move down the scale. Their second method (DIFF) uses the difference between the maximum and minimum words on a scale to define an abstract vector of scale position; the cosine similarity of any word to this vector is taken to indicate its scale position. Broadly inspired by Maillard and Clark (2015) and Socher et al. (2012)'s work on semantic composition for nouns, we also devised a third method (AdjDIFF). Reasoning that the effect of the scalar adverb on the contextual embedding of the adjective may correlate with the scalar adverb's position on the scale, we obtain embeddings for each adjective with and without the scalar adverb. We subtract the unmodified embedding from the modified embedding of the adjective to obtain an estimate of the vector for the scalar adverb. We then take the cosine similarity of each resulting vector with the same referent vector as in the DIFF method and average these similarities to obtain the final value (see the illustrative sketch below):

rank_{AdjDiff} = \cos\left(\vec{v}_{adj(+adv)} - \vec{v}_{adj},\; \vec{v}_{top} - \vec{v}_{end}\right) \quad (1)

The results for the AdjDiff method, which was overall the best performing, can be found in Table 3. We report the pairwise accuracy, Spearman ρ and tie-corrected Kendall's τ for RoBERTa, BERT large and BERT base (see the Appendix for the results using the Soler and Apidianaki (2020) methods). Overall, the performance is worse than what Soler and Apidianaki (2020) obtained for adjectival half-scales. Adverb ranking may be more difficult than adjective ranking, and/or full scales may be more difficult to rank than half-scales.
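A minimal sketch of the AdjDiff score in Equation (1), with several simplifying assumptions: a single carrier sentence instead of averaging over the 40 adjectives, single-wordpiece adjectives and adverbs, and scale extremes taken as the contextual embeddings of the top and bottom adverbs (the paper does not spell out exactly how the referent vector is built).

```python
# Illustrative sketch of Eq. (1): cosine of the shift an adverb induces on an
# adjective's contextual embedding against an abstract top-minus-bottom scale vector.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-large-cased")
model = AutoModel.from_pretrained("bert-large-cased").eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Last-layer hidden state of a (single-wordpiece) word inside a carrier sentence."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    idx = (enc["input_ids"][0] == tok.convert_tokens_to_ids(word)).nonzero()[0].item()
    return hidden[idx]

def adjdiff_score(adverb: str, adjective: str, top: str = "always", bottom: str = "never") -> float:
    # shift that the adverb induces on the adjective's contextual embedding ...
    shift = word_embedding(f"It is {adverb} {adjective}.", adjective) \
          - word_embedding(f"It is {adjective}.", adjective)
    # ... compared against a scale direction built from the scale extremes
    scale = word_embedding(f"It is {top} {adjective}.", top) \
          - word_embedding(f"It is {bottom} {adjective}.", bottom)
    return torch.cosine_similarity(shift, scale, dim=0).item()

print(adjdiff_score("often", "late"), adjdiff_score("never", "late"))
```

In the paper, this score is averaged over the 40 common adjectives listed above before the adverbs are ranked.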
However, the overall accuracy of 0.64 for the BERT-large method indicates that some information about the relations has been captured. It is interesting that BERT-large performs better than RoBERTa, for which the existence of negative values of Spearman ρ and Kendall's τ is particularly problematic. We also note that the FREQUENCY category shows the worst performance.

4.2 Masked Language Modelling
MLM is one of two training objectives for BERT, and the only objective for RoBERTa. For BERT or RoBERTa to form good representations of the scalar adverbs, the larger context needs to contain information about which ones are most likely in any given instance. Accordingly, we directly evaluate the raw Masked Language Modelling outputs for the target phrases we selected. How well does each model predict a scalar adverb in a context when it is masked? Based on the discussion in Section 2, we expect this task to be extremely difficult. For most of our examples, it appears that humans would be unlikely to succeed in predicting the masked word. However, through learned attention weights, BERT and RoBERTa integrate information over a large time window, potentially performing better than expected. Possible sources of information about the scalar adverb include collocations or selectional restrictions involving the following adjective, and rhetorical devices or idiomatic expressions that involve the preceding context. Hence, we systematically explore the success of MLM in predicting a masked adverb. If MLM is successful, that means the predictive information is present in a way that is not intuitively evident, whereas if MLM fails, that tends to suggest that predictive information is simply lacking in the text stream.

In light of the difficulty of the task, we report two measures. One is the Mean Reciprocal Rank (MRR) for the original adverb, which scores high if the adverb that occurred is ranked highly even when it is not the top prediction (unlike Truong et al. (2023), we do not use the Weighted Hit Rate (WHR), since it requires a fixed set of wrong predictions). MRR is defined as

MRR = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{rank_{adv}} \quad (2)

where N is the number of items for the original target adverb and rank_{adv} is the rank of the original target adverb among the model's predictions. Our other metric is whether the model ranked the original adverb above not. In our materials, replacement with not is always syntactically possible, but would either contradict the previous context or contradict what the speaker actually said. Thus, not should generally be ranked as less likely than any scalar adverb, with the exception of other negative polarity items such as hardly and never (since we do not have precise numbers on negation acceptability for our examples, a difficult issue in naturalistic contexts which is beyond the scope of this article (see Limitations), this metric should be regarded as indicative only, in contrast to MRR).

We test three models in the BERT family: BERT base, BERT large and RoBERTa large. We also test GPT-2 (Radford et al., 2019), which, unlike the BERT-family models, is an autoregressive model and helps us see whether the right-hand context contains relevant information. We use the pretrained cased BERT large and BERT base from Huggingface's transformers library (Wolf et al., 2019), replacing our target scalar adverb with the [MASK] token and obtaining the logits, which we convert back into probabilities. We use neutralised versions of the sentences, e.g., 'is ADV ADJ.', as a baseline for predictions. This provides the BERT-family models with a syntactic cue plus any selectional biases from the ADJ.

4.3 MLM Results
Results can be found in Table 4. All models perform extremely poorly in the neutral context, indicating that adjectives alone are not sufficient to predict adverbs.
(The GPT2-neutral condition of course has no success, since GPT-2 does not use right-hand context.)

Model | Pairwise acc. | Spearman ρ (f / m / d) | Kendall τ (f / m / d)
BERT-b | 0.60 | 0.68 / 0.77 / 0.32 | 0.52 / 0.66 / 0.23
BERT-l | 0.64 | 0.78 / 0.88 / 0.39 | 0.62 / 0.77 / 0.24
RoBERTa | 0.53 | -0.32 / 0.77 / 0.64 | -0.24 / 0.67 / 0.52
Table 3: Results of the scalar ranking tests (BERT-b = BERT base; BERT-l = BERT large; f = FREQUENCY, m = MODALITY, d = DEGREE).

The results for the full context are better. Both BERT large and BERT base get a significant boost from the full context, both in upranking the original adverb (MRR doubling for both models) and in ranking the original adverb above negation. RoBERTa performs best overall. The MODALITY category gets the highest boost from context, from 0.02 to 0.57; this may be due in part to the fact that English lacks a negative item in this category. Error analysis shows that BERT still yields negation as the top prediction or among the top predictions even in cases where the context makes it unlikely (e.g., not is the top prediction for the first two examples in Table 2).

We construct confusion matrices between the original target adverb and the top output prediction for each example, for each model and context. We select the first of our target adverbs in the top 10 outputs, or the category 'other' when none of the top 10 predictions appears in our list. The heatmaps with context can be seen in Figure 1 (the full set of heatmaps, including outcomes for the neutral context, can be found in Figure 2 in the Appendix). While there is some indication that BERT can predict the original adverb (i.e., the faintly lit-up diagonal), it is clear that the decision is strongly driven by prior frequency effects, with not and very topping the predictions for all targets. RoBERTa achieves better performance, as evidenced by the more strongly lit-up diagonal, but frequency effects still dominate (the vertical lines for items such as very). The figure is laid out so that within-category confusions would show up in a block-diagonal pattern. No such pattern exists, indicating that scalar adverbs within the same semantic category do not emerge as particularly similar. The fact that the models enjoy some success when provided with the left-hand context of the target indicates that the left-hand context (unlike the right-hand context) contains information about which scalar adverb is more likely in which instance. However, because of the naturalistic and varied nature of our examples, it is uncertain to what extent this success derives from distributional patterns versus logical relationships.

4.4 Scalar entailment task
The adverb rankings in Table 1 can readily be translated into entailments. Evaluating putative entailments is an established test of logical reasoning: It is always cold entails It is sometimes cold, but the reverse is not true. If the models have reliably learned the logical relations between scalar adverbs during pretraining, they should rank completions which create correct entailments higher than completions which create contradictions.
We set up an entailment task as an MLM task in two conditions, which we illustrate now with example items constructed from the ADV ADJ combination often special. For the BELOW condition, we create items where we expect an adverb which is below the premise adverb on the relevant scale, e.g., If it is often special, then it is at least [MASK] special (sometimes, occasionally, etc.). For the ABOVE condition, we expect a completion which is above the premise adverb, e.g., If it is [MASK] special, then it is at least sometimes special (often, usually, etc.). We craft items using eight different templates for each condition (varying the order of premise and mask as well as the subordinating conjunction). These can be found in the Appendix along with detailed results for each template. We omit the scalar adverbs which are technically negations (hardly, never) and omit bottom scalar adverbs (sometimes, maybe, slightly) for the BELOW condition and top scalar adverbs (always, definitely, completely) for the ABOVE condition, since no correct answer is available for these. In contrast to the natural items for the MLM task, not is always a logically impossible completion for all items in the entailment task.

We used 160 adjectives systematically varied in frequency. From our Reddit data, we selected adjectives in the low frequency range (log frequency -18 to -14), the medium frequency range (log frequency -14 to -10) and the high frequency range (log frequency -10 to -6). We use the recent wordfreq Python library for this purpose (Speer, 2022), which is sourced from the Exquisite Corpus project and compiles multilingual data from 8 domains. In addition, we selected 40 pseudowords as adjectives from the highest-ranked items in Needle et al. (2022), under the constraint that they are not compounds of real words, so that the pretrained models' WordPiece tokenization does not introduce any previous information. We combine these 160 adjectives with our target adverbs for each template and condition to create a dataset of 40,960 sentences for which we collect MLM completions from BERT large and RoBERTa.

Category | BERTb(c) ac./r | BERTb(n) ac./r | BERTl(c) ac./r | BERTl(n) ac./r | RoBl(c) ac./r | RoBl(n) ac./r | GPT2(c) ac./r | GPT2(n) ac./r
FREQ. | .22/.11 | .02/.04 | .36/.15 | .06/.05 | .52/.21 | .06/.04 | .08/.04 | .00/.01
MOD. | .19/.09 | .00/.02 | .33/.11 | .01/.02 | .57/.18 | .02/.02 | .09/.05 | .00/.01
DEG. | .4/.2 | .08/.10 | .53/.24 | .22/.15 | .67/.28 | .29/.16 | .16/.06 | .00/.01
avg. | .27/.14 | .03/.05 | .41/.17 | .17/.07 | .59/.22 | .12/.07 | .11/.05 | .00/.01
Table 4: Accuracies (ac.), i.e., the proportion of times the original adverb was ranked above not, and Mean Reciprocal Rank (r) for each model and semantic category. (c) = full context; (n) = neutral context.

Figure 1: Heatmaps of confusion matrices per scalar adverb, with context; items are grouped by semantic category. Panels: (a) BERT large with context, (b) RoBERTa large with context. In the interest of space, we only show results for BERT large and RoBERTa.

4.5 Entailment results
As in the MLM task, to construct confusion matrices we select the first answer on our adverbs list from the top 10 completions of the model (including negative items: not, hardly and never), and the category 'other' if no completion in the top 10 is found in the relevant category. To calculate accuracies, we exclude trivial answers (e.g., If it is sometimes strong, then it is sometimes strong) as well as 'other' answers which do not pertain to the target semantic category (e.g., mostly when the category is temporal).
However, we do report trivial answer percentages separately. A model which randomly picks an adverb in the relevant category produces a trivial-answer rate of 0.13. The results when taking negative answers into account are very poor: the models output a high percentage of negations (especially 'not') even though negations constitute logical contradictions for all sentences in the entailment dataset. To get a more nuanced picture of the models' behaviour and support comparisons with the MLM results, we also build heatmaps without taking negative answers into account, and calculate accuracies without negations. Results (with and without negations) can be found in Table 5. More detailed results by adjective frequency can be found in Table 8 in the Appendix. Both sets of heatmaps (including negative answers) can also be found in Figures 3 and 4 in the Appendix.

When choosing the top relevant answer excluding negations, the results improve drastically, to near ceiling. However, the models most likely benefit from biases in both conditions. In the ABOVE condition, the most frequent items (always, actually, very) constitute correct answers in a majority of cases. In the BELOW condition, all templates have a textual hint (at least/at most) which strongly suggests an item outside the top of the scale (?at least/at most always/completely/definitely). The benefits from the bias towards high-frequency top-of-scale adverbs in the ABOVE condition are probably stronger than those from the textual hints in the BELOW condition, which may explain why the models perform worse in the BELOW condition. Adjective frequency has little effect on performance, which is in fact slightly better for low-frequency adjectives and pseudowords. This provides further evidence that the models are not memorizing ADV-ADJ combinations. This observation is strengthened by adverb frequency effects which prevail across scales (i.e., vertical lines in the heatmaps, e.g., 'slightly', 'really') in both BERT large and RoBERTa. The rate of trivial answers in both models also appears to be far above what would be expected from humans (although this remains to be tested by collecting human judgments).

Condition | BERT-l acc. | BERT-l triv. | RoBERTa acc. | RoBERTa triv.
with negation | 0.35 | 0.20 | 0.42 | 0.17
without negation | 0.88 | 0.33 | 0.88 | 0.25
BELOW | 0.53 | 0.25 | 0.60 | 0.21
ABOVE | 0.69 | 0.28 | 0.71 | 0.14
Table 5: Results on the scalar entailment dataset (BERT large and RoBERTa). 'without negation' = not counting negations as answers; acc. = accuracy; triv. = proportion of trivial answers (e.g., If it's sometimes ADJ, it's sometimes ADJ, which we do not count towards accuracies); BELOW = expects an item below the premise on the relevant scale; ABOVE = expects an item above it (best in bold in the original).

To summarize, both BERT large and RoBERTa show very poor ability to distinguish between non-negative scalar adverbs and negation. The models perform well if we consider the first completions excluding negations; however, they most likely benefit from frequency biases, and it is doubtful whether they have learned a separate logical representation of the adverbs' scalar properties. The models also output a high number of trivial, uninformative completions and seem affected by noise associated with frequent adjectives. Finally, differing performance on the ABOVE and BELOW conditions, which are logically equivalent, indicates that neither model has a general grasp of the underlying logic.
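To make the probing setup of Sections 4.2-4.4 concrete, here is a minimal sketch that ranks candidates at the [MASK] position, checks whether the original adverb beats not, and accumulates MRR. The model choice, prompt handling and single-wordpiece assumption are simplifications rather than the authors' released code.

```python
# Illustrative sketch of the masked-adverb probe: vocabulary rank of the original
# adverb at the [MASK] position, and whether it outscores 'not'.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-large-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-large-cased").eval()

def mask_scores(sentence: str) -> torch.Tensor:
    """Vocabulary log-probabilities at the [MASK] position."""
    enc = tok(sentence, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_pos]
    return logits.log_softmax(-1)

def probe(sentence: str, original_adverb: str):
    """Assumes the adverb and 'not' are single wordpieces; multi-piece adverbs need extra handling."""
    scores = mask_scores(sentence)
    adv_score = scores[tok.convert_tokens_to_ids(original_adverb)]
    rank = (scores > adv_score).sum().item() + 1        # rank of the original adverb
    beats_not = adv_score > scores[tok.convert_tokens_to_ids("not")]
    return 1.0 / rank, bool(beats_not)                  # reciprocal rank, accuracy criterion

# Example item adapted from Table 2 (original adverb: 'certainly').
rr, acc = probe(f"Slavery was not moral but it was {tok.mask_token} legal.", "certainly")
print(rr, acc)
```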
5 Conclusion

The goal of this paper was to examine how well pretrained language models such as BERT and RoBERTa represent and process full scales of scalar adverbs in the absence of any specific task fine-tuning. We used naturalistic data from Reddit and also constructed sentences in order to explore the language models' ability to predict different types of scalar adverbs in context, and to distinguish them from negation. The models achieved some success when a left context of up to 40 words was available. However, we note many shortcomings: weak differentiation amongst the semantic classes of adverbs, poor ability to discriminate scalar adverbs from negations even in contexts where a negation would create a contradiction, strong effects of adverb frequencies, and a lack of generalisation across two logically equivalent entailment constructions.

6 Limitations

Scale: Although our list of adverbs was carefully curated to include different semantic categories, full scales (including negations) and downsizer adverbs (e.g., slightly), unlike in previous work, it is a restricted sample of only 24 adverbs. While we believe this is a representative list, it is by no means an exhaustive one, and the conclusions drawn in this paper must be confined to the semantic categories explored. Thus we cannot exclude the possibility that experiments using a larger list of adverbs would produce different results.

Acceptability of negation: For some of our natural stimuli, the use of not would be infelicitous or illogical. For others, not would be possible but would express a meaning that contradicts what the speaker chose to express. Without a large-scale exploration of alternative contextualizations of the items, it is difficult to separate these cases. For example, for item 498, a substitution of not would appear to be rather infelicitous; however, by imagining further context we can see that this substitution would not be impossible.
1. Including the bill itself, I think you'll be looking at around 600 A4 pages of reading. And a lot of it is really dry.
2. Including the bill itself, I think you'll be looking at around 600 A4 pages of reading. And a lot of it is not dry.
3. Including the bill itself, I think you'll be looking at around 600 A4 pages of reading. And a lot of it is not dry. Your week might be more thrilling than you expect.

Models: While we tried to explore the predictions of different types of pretrained models (i.e., GPT and BERT), we acknowledge that we did not run an extensive study of models from different families. This is in part because these are the most commonly used models in applications, but also because our study is qualitative and we were mostly interested in comparing the models' outputs with and without context, and in comparing performance between our semantic categories rather than between different models. We also wished to focus on open-source models for which we could extract embeddings to explore a potential subspace for scalar properties.

Gold standard: There are few resources providing gold standard labels for the scalar position of scalar adverbs in general, and especially ones that include different semantic categories and downsizers as well as maximisers. This limited our available choices of scalar adverbs to investigate. We provided the gold standard labels for the list of adverbs, based on information in WordNet and claims in the research literature, excluding adverbs whose semantics appeared unclear.
While these rankings are informed by our best knowledge of semantics as experienced linguists, they were provided by a few researchers rather than by gathering judgments from many crowd workers as in other studies.

7 Acknowledgements

This study was supported by EPSRC Grant EP/W037211/1. We are grateful to Valentin Hofmann, Fangru Lin, and Paul Röttger for useful comments on a previous draft.",

"introduction": "Large pretrained language models such as BERT (Devlin et al., 2018) all rely on the assumption that the meanings of words are revealed by the company they keep (Harris et al., 1954); Masked Language Modelling (MLM) is a useful training objective because words in the context of the masked word provide cues to its identity. This assumption is highly appropriate for nouns and named entities. Different topics of discussion involve different entities, and in discussing any given topic, the associated entities will be referred to repeatedly. However, the assumption holds less well for some other classes of words that can be used in discussing virtually any topic. These include quantifiers (e.g., few, many, all), words expressing negation (e.g., not, no, never) and the focus of the present paper, scalar adverbs, such as perhaps, certainly, never, often, very, completely. Many scalar adverbs tend to occur in similar contexts but have distinct meanings. As Abrusan et al. (2018) put it, distributional models tend to be 'blind' to logical meanings because the latter are topic independent, and thus their meanings tend not to be reflected in their distributional contexts.

Accurate processing of scalar adverbs is pertinent to a wide variety of NLP applications, including sentiment analysis (De Melo and Bansal, 2013; Ruppenhofer et al., 2014), entailment inferences (McNally, 2017), detecting contradictions, and indirect Question Answering (De Marneffe et al., 2010). BERT family models succeed without fine-tuning on a remarkable variety of tasks, and unsupervised embeddings from BERT base have been used fairly successfully to rank gradable adjectives according to their scalar position on half-scales from a neutral to an extreme value (e.g., warm < hot < scalding) (Soler and Apidianaki, 2020). This result suggests that the BERT architecture might also be successful in exploiting diffuse or indirect cues for the selection of scalar adverbs. However, the same models have had little success in representing negation (Ettinger, 2020) or antonymy (Talmor et al., 2020). In fact, even large language models (LLMs) still struggle substantially with negation (Truong et al., 2023). Soler and Apidianaki (2020) do not evaluate adjectival antonyms, such as {cold, hot}. However, it is worth noting that scalar adverbs intrinsically include antonym pairs (e.g., {never, always}). Furthermore, the word not is syntactically possible wherever a scalar adverb can occur, yet would express a meaning contradictory to any positive scalar adverb. These latter observations suggest that scalar adverbs might present important challenges for LLMs, and point to the need for a deep assessment.

Category            Adverbs
MODALITY (14.8%)    {maybe, perhaps, possibly}, arguably, probably, actually, certainly, definitely
FREQUENCY (5.3%)    never, occasionally, sometimes, often, generally, usually, frequently, always
DEGREE (46.8%)      hardly, slightly, basically, pretty, quite, very, really, completely

Table 1: Scalar adverbs in each semantic category ranked by scalar position using semantic theory and WordNet definitions. Bracketed items are tied.
Percentages for each category are the overall percentage of that category in the Reddit slice in relation to the set containing the target adverbs and not.

We look at full scales, and directly compare scalar adverbs to explicit negation. We consider the 24 adverbs in Table 1, selected through the process described in Section 3. These vary widely in frequency, with very found 22818 times in the Reddit slice and frequently found only 52 times. At 40986 occurrences, not is more frequent than any scalar adverb (see Appendix for full adverb frequencies). We ask the following questions: 1) Are pretrained language models able to distinguish between different semantic categories of scalar adverbs? 2) Do they have general representations of full scales for adverbs, from maximally negative to maximally positive? 3) Do their representations support success in three tasks: ranking, MLM, and evaluating entailments? 4) In what way do the patterns of success and failure relate to word frequency and contextual factors?" } ], "Andrey Kormilitzin": [ { "url": "http://arxiv.org/abs/2010.08433v2", "title": "An efficient representation of chronological events in medical texts", "abstract": "In this work we addressed the problem of capturing the sequential information contained in longitudinal electronic health records (EHRs). Clinical notes, a particular type of EHR data, are a rich source of information, and practitioners often develop clever solutions for maximising the sequential information contained in free texts. We propose a systematic methodology for learning from the chronological events available in clinical notes. The proposed methodological {\it path signature} framework creates a non-parametric hierarchical representation of sequential events of any type, which can be used as features for downstream statistical learning tasks. The methodology was developed and externally validated using the UK's largest secondary care mental health EHR data, on the specific task of predicting the survival risk of patients diagnosed with Alzheimer's disease. The signature-based model was compared to a common survival random forest model. Our results showed a 15.4$\%$ increase in risk prediction AUC at the time point of 20 months after the first admission to a specialist memory clinic, and the signature method outperformed the baseline mixed-effects model by 13.2$\%$.", "authors": "Andrey Kormilitzin, Nemanja Vaci, Qiang Liu, Hao Ni, Goran Nenadic, Alejo Nevado-Holgado", "published": "2020-10-16", "updated": "2020-10-24", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.IR" ], "main_content": "2.1 Data

The data in this study were sourced from the UK-Clinical Record Interactive Search system (UK-CRIS), which provides a research platform (https://crisnetwork.co/) for data mining and analysis using de-identified, real-world observational electronic patient records from twelve secondary care UK Mental Health NHS Trusts (Goodday et al., 2020). UK-CRIS provides access to structured information, such as ICD-10 coded diagnoses, quality of life scales and demographic information, as well as various unstructured texts, such as clinical summaries, discharge letters and progress notes.
The study cohort jointly comprised records from 24,108 patients diagnosed with Alzheimer's disease and various types of dementia, containing more than 3.7 million individual clinical documents from two centres: Oxford and Southern Health NHS Foundation Trusts. The field of clinical NLP in general, and of mental health and Alzheimer's research in particular, largely suffers from a dearth of gold-annotated data. The reason is the shortage of trained annotators with a clinical background who are also authorised to access sensitive patient-level data. Therefore, to develop a robust information extraction (IE) model from an insufficient amount of data, we leveraged the idea of transfer learning using the publicly available MIMIC-III corpus (Johnson et al., 2016), which comprises information relating to patients admitted to intensive care units (ICU) and contains more than 2.1 million clinical notes, as well as the 505 discharge summaries gold-annotated by clinical experts from the 2018 n2c2 challenge (Henry et al., 2020). The study was independently reviewed and approved by the Oxfordshire and Southern Health NHS Foundation Trust Research Ethics Committees.

2.2 Information extraction model

The information extraction model was developed to identify diagnoses, medications and the Mini-Mental State Examination (MMSE) cognitive health assessment score (Pangman et al., 2000). Additionally, the identified entities were classified according to several attributes, such as the 'experiencer' modality (i.e., whether the MMSE actually refers to the patient or to a family member), temporal information (i.e., the date of a diagnosis or MMSE score) and negation (e.g., discontinued medications) (Harkema et al., 2009; Gligic et al., 2019). Negated drug mentions were discarded in order to extract the most accurate information. Generic and brand drug names were normalised using the British National Formulary, the core pharmaceutical reference book (Committee et al., 2019). The architecture of the named entity recognition model combined ontology-based fuzzy pattern matching with a bi-directional LSTM neural network using an attention mechanism (Bahdanau et al., 2014) for sequence classification. The GloVe word embeddings (Pennington et al., 2014) were fine-tuned on both MIMIC-III and UK-CRIS data (Vaci et al., 2020a; Kormilitzin et al., 2020). The developed IE model was trained only on data from the Oxford Health NHS Trust instance and externally validated on a sample of data from the regionally different Southern Health NHS Foundation Trust.

2.3 The signature of a path

Repeated measurements, speech, text, time series or any other sequential data may be seen as a path-valued random variable. Formally, a path $X$ of finite length in $d$ dimensions can be described by the mapping $X : [a, b] \to \mathbb{R}^d$, or in terms of coordinates $X = (X^1_t, X^2_t, \ldots, X^d_t)$, where each coordinate $X^i_t$ is real-valued and parametrised by $t \in [a, b]$. The signature representation $S$ of a path $X$ is defined as the infinite series:

$$S(X)_{a,b} = \big(1,\ S(X)^1_{a,b},\ S(X)^2_{a,b},\ \ldots,\ S(X)^d_{a,b},\ S(X)^{1,1}_{a,b},\ S(X)^{1,2}_{a,b},\ \ldots\big), \qquad (1)$$

where each term is a $k$-fold iterated integral of the path $X$ labelled by the multi-index $i_1, \ldots, i_k$:

$$S(X)^{i_1,\ldots,i_k}_{a,b} = \int_{a < t_1 < \cdots < t_k < b} \mathrm{d}X^{i_1}_{t_1} \cdots \mathrm{d}X^{i_k}_{t_k}.$$
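To make the construction concrete, below is a minimal numpy sketch of the truncated signature (levels 1 and 2) of a piecewise-linear path built from time-stamped events; the function and the toy path are illustrative assumptions rather than the pipeline used in the paper, and optimized libraries such as esig or iisignature can compute higher-order terms efficiently.

import numpy as np

def signature_level2(path):
    # Truncated path signature (levels 1 and 2) of a piecewise-linear path.
    # path: array of shape (n_points, d), e.g. time stamps in one column and an
    # extracted clinical measurement (such as an MMSE score) in another.
    # Returns the flat vector (1, S^1, ..., S^d, S^{1,1}, ..., S^{d,d}).
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    s1 = np.zeros(d)        # level-1 terms: total increment of each coordinate
    s2 = np.zeros((d, d))   # level-2 terms: iterated integrals S^{i,j}
    for delta in np.diff(path, axis=0):
        # Chen's identity for appending one linear segment with increment `delta`
        s2 += np.outer(s1, delta) + 0.5 * np.outer(delta, delta)
        s1 += delta
    return np.concatenate(([1.0], s1, s2.ravel()))

# Toy example: a 2-dimensional path (time in months, MMSE score)
toy_path = np.array([[0.0, 26.0], [6.0, 24.0], [12.0, 21.0]])
print(signature_level2(toy_path))

The level-1 terms recover the overall change in each coordinate, while the level-2 terms capture order-dependent, area-like information, which is what allows the signature to encode the chronology of events rather than just their aggregate values.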